00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v22.11" build number 87 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3265 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.063 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.063 The recommended git tool is: git 00:00:00.063 using credential 00000000-0000-0000-0000-000000000002 00:00:00.065 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.090 Fetching changes from the remote Git repository 00:00:00.092 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.129 Using shallow fetch with depth 1 00:00:00.129 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.129 > git --version # timeout=10 00:00:00.170 > git --version # 'git version 2.39.2' 00:00:00.170 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.201 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.201 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.440 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.451 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.463 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:03.463 > git config core.sparsecheckout # timeout=10 00:00:03.473 > git read-tree -mu HEAD # timeout=10 00:00:03.490 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:03.509 Commit message: "inventory: add WCP3 to free inventory" 00:00:03.509 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:03.593 [Pipeline] Start of Pipeline 00:00:03.603 [Pipeline] library 00:00:03.605 Loading library shm_lib@master 00:00:03.605 Library shm_lib@master is cached. Copying from home. 00:00:03.621 [Pipeline] node 00:00:03.628 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.630 [Pipeline] { 00:00:03.638 [Pipeline] catchError 00:00:03.639 [Pipeline] { 00:00:03.651 [Pipeline] wrap 00:00:03.659 [Pipeline] { 00:00:03.667 [Pipeline] stage 00:00:03.669 [Pipeline] { (Prologue) 00:00:03.831 [Pipeline] sh 00:00:04.139 + logger -p user.info -t JENKINS-CI 00:00:04.157 [Pipeline] echo 00:00:04.159 Node: GP11 00:00:04.166 [Pipeline] sh 00:00:04.464 [Pipeline] setCustomBuildProperty 00:00:04.473 [Pipeline] echo 00:00:04.474 Cleanup processes 00:00:04.479 [Pipeline] sh 00:00:04.759 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.759 2955135 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.771 [Pipeline] sh 00:00:05.055 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.055 ++ grep -v 'sudo pgrep' 00:00:05.055 ++ awk '{print $1}' 00:00:05.055 + sudo kill -9 00:00:05.055 + true 00:00:05.069 [Pipeline] cleanWs 00:00:05.078 [WS-CLEANUP] Deleting project workspace... 00:00:05.078 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.084 [WS-CLEANUP] done 00:00:05.087 [Pipeline] setCustomBuildProperty 00:00:05.097 [Pipeline] sh 00:00:05.390 + sudo git config --global --replace-all safe.directory '*' 00:00:05.457 [Pipeline] httpRequest 00:00:05.496 [Pipeline] echo 00:00:05.497 Sorcerer 10.211.164.101 is alive 00:00:05.503 [Pipeline] httpRequest 00:00:05.506 HttpMethod: GET 00:00:05.507 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.507 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:05.528 Response Code: HTTP/1.1 200 OK 00:00:05.528 Success: Status code 200 is in the accepted range: 200,404 00:00:05.529 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:10.715 [Pipeline] sh 00:00:10.998 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:11.015 [Pipeline] httpRequest 00:00:11.041 [Pipeline] echo 00:00:11.043 Sorcerer 10.211.164.101 is alive 00:00:11.052 [Pipeline] httpRequest 00:00:11.058 HttpMethod: GET 00:00:11.059 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:11.059 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:11.060 Response Code: HTTP/1.1 200 OK 00:00:11.061 Success: Status code 200 is in the accepted range: 200,404 00:00:11.061 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:28.149 [Pipeline] sh 00:00:28.434 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:31.732 [Pipeline] sh 00:00:32.013 + git -C spdk log --oneline -n5 00:00:32.013 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:00:32.013 330a4f94d nvme: check pthread_mutex_destroy() return value 00:00:32.013 7b72c3ced nvme: add nvme_ctrlr_lock 00:00:32.013 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:00:32.013 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:00:32.031 [Pipeline] withCredentials 00:00:32.041 > git --version # timeout=10 00:00:32.053 > git --version # 'git version 2.39.2' 00:00:32.071 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:32.073 [Pipeline] { 00:00:32.082 [Pipeline] retry 00:00:32.085 [Pipeline] { 00:00:32.102 [Pipeline] sh 00:00:32.386 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:32.397 [Pipeline] } 00:00:32.419 [Pipeline] // retry 00:00:32.425 [Pipeline] } 00:00:32.445 [Pipeline] // withCredentials 00:00:32.455 [Pipeline] httpRequest 00:00:32.485 [Pipeline] echo 00:00:32.486 Sorcerer 10.211.164.101 is alive 00:00:32.495 [Pipeline] httpRequest 00:00:32.500 HttpMethod: GET 00:00:32.500 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:32.501 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:32.529 Response Code: HTTP/1.1 200 OK 00:00:32.530 Success: Status code 200 is in the accepted range: 200,404 00:00:32.530 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:23.190 [Pipeline] sh 00:01:23.474 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:25.430 [Pipeline] sh 00:01:25.711 + git -C dpdk log --oneline -n5 00:01:25.711 caf0f5d395 version: 22.11.4 00:01:25.711 7d6f1cc05f 
Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:25.711 dc9c799c7d vhost: fix missing spinlock unlock 00:01:25.711 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:25.711 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:25.721 [Pipeline] } 00:01:25.736 [Pipeline] // stage 00:01:25.744 [Pipeline] stage 00:01:25.746 [Pipeline] { (Prepare) 00:01:25.766 [Pipeline] writeFile 00:01:25.781 [Pipeline] sh 00:01:26.064 + logger -p user.info -t JENKINS-CI 00:01:26.076 [Pipeline] sh 00:01:26.359 + logger -p user.info -t JENKINS-CI 00:01:26.371 [Pipeline] sh 00:01:26.659 + cat autorun-spdk.conf 00:01:26.659 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:26.659 SPDK_TEST_NVMF=1 00:01:26.659 SPDK_TEST_NVME_CLI=1 00:01:26.659 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:26.659 SPDK_TEST_NVMF_NICS=e810 00:01:26.659 SPDK_TEST_VFIOUSER=1 00:01:26.659 SPDK_RUN_UBSAN=1 00:01:26.659 NET_TYPE=phy 00:01:26.659 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:26.659 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:26.675 RUN_NIGHTLY=1 00:01:26.679 [Pipeline] readFile 00:01:26.707 [Pipeline] withEnv 00:01:26.709 [Pipeline] { 00:01:26.721 [Pipeline] sh 00:01:27.004 + set -ex 00:01:27.004 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:27.004 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:27.004 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:27.004 ++ SPDK_TEST_NVMF=1 00:01:27.004 ++ SPDK_TEST_NVME_CLI=1 00:01:27.004 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:27.004 ++ SPDK_TEST_NVMF_NICS=e810 00:01:27.004 ++ SPDK_TEST_VFIOUSER=1 00:01:27.004 ++ SPDK_RUN_UBSAN=1 00:01:27.004 ++ NET_TYPE=phy 00:01:27.004 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:27.004 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:27.004 ++ RUN_NIGHTLY=1 00:01:27.004 + case $SPDK_TEST_NVMF_NICS in 00:01:27.004 + DRIVERS=ice 00:01:27.004 + [[ tcp == \r\d\m\a ]] 00:01:27.004 + [[ -n ice ]] 00:01:27.004 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:27.004 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:27.004 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:27.004 rmmod: ERROR: Module irdma is not currently loaded 00:01:27.004 rmmod: ERROR: Module i40iw is not currently loaded 00:01:27.004 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:27.004 + true 00:01:27.004 + for D in $DRIVERS 00:01:27.004 + sudo modprobe ice 00:01:27.004 + exit 0 00:01:27.013 [Pipeline] } 00:01:27.030 [Pipeline] // withEnv 00:01:27.035 [Pipeline] } 00:01:27.051 [Pipeline] // stage 00:01:27.061 [Pipeline] catchError 00:01:27.062 [Pipeline] { 00:01:27.079 [Pipeline] timeout 00:01:27.079 Timeout set to expire in 50 min 00:01:27.081 [Pipeline] { 00:01:27.098 [Pipeline] stage 00:01:27.100 [Pipeline] { (Tests) 00:01:27.113 [Pipeline] sh 00:01:27.398 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.398 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.398 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.398 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:27.398 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:27.398 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:27.398 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:27.398 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:27.398 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:27.398 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:27.398 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:27.398 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:27.398 + source /etc/os-release 00:01:27.398 ++ NAME='Fedora Linux' 00:01:27.398 ++ VERSION='38 (Cloud Edition)' 00:01:27.398 ++ ID=fedora 00:01:27.398 ++ VERSION_ID=38 00:01:27.398 ++ VERSION_CODENAME= 00:01:27.398 ++ PLATFORM_ID=platform:f38 00:01:27.398 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:27.398 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:27.398 ++ LOGO=fedora-logo-icon 00:01:27.398 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:27.398 ++ HOME_URL=https://fedoraproject.org/ 00:01:27.398 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:27.398 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:27.398 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:27.398 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:27.398 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:27.398 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:27.398 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:27.398 ++ SUPPORT_END=2024-05-14 00:01:27.398 ++ VARIANT='Cloud Edition' 00:01:27.398 ++ VARIANT_ID=cloud 00:01:27.398 + uname -a 00:01:27.398 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:27.398 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:28.338 Hugepages 00:01:28.338 node hugesize free / total 00:01:28.338 node0 1048576kB 0 / 0 00:01:28.338 node0 2048kB 0 / 0 00:01:28.338 node1 1048576kB 0 / 0 00:01:28.338 node1 2048kB 0 / 0 00:01:28.338 00:01:28.338 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:28.338 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:28.338 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:28.338 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:28.338 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:28.338 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:28.338 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:28.338 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:28.338 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:28.338 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:28.338 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:28.338 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:28.338 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:28.338 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:28.338 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:28.338 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:28.338 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:28.338 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:28.338 + rm -f /tmp/spdk-ld-path 00:01:28.338 + source autorun-spdk.conf 00:01:28.338 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.338 ++ SPDK_TEST_NVMF=1 00:01:28.338 ++ SPDK_TEST_NVME_CLI=1 00:01:28.338 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.338 ++ SPDK_TEST_NVMF_NICS=e810 00:01:28.338 ++ SPDK_TEST_VFIOUSER=1 00:01:28.338 ++ SPDK_RUN_UBSAN=1 00:01:28.338 ++ NET_TYPE=phy 00:01:28.338 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:28.338 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:28.338 ++ RUN_NIGHTLY=1 00:01:28.338 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:28.338 + [[ -n '' ]] 00:01:28.338 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:28.338 + for M in /var/spdk/build-*-manifest.txt 00:01:28.338 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:28.338 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.338 + for M in /var/spdk/build-*-manifest.txt 00:01:28.338 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:28.338 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:28.338 ++ uname 00:01:28.338 + [[ Linux == \L\i\n\u\x ]] 00:01:28.338 + sudo dmesg -T 00:01:28.338 + sudo dmesg --clear 00:01:28.338 + dmesg_pid=2956472 00:01:28.338 + [[ Fedora Linux == FreeBSD ]] 00:01:28.338 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.338 + sudo dmesg -Tw 00:01:28.338 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:28.338 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:28.338 + [[ -x /usr/src/fio-static/fio ]] 00:01:28.338 + export FIO_BIN=/usr/src/fio-static/fio 00:01:28.338 + FIO_BIN=/usr/src/fio-static/fio 00:01:28.338 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:28.338 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:28.338 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:28.338 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.338 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:28.338 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:28.338 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.338 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:28.338 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:28.338 Test configuration: 00:01:28.598 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:28.598 SPDK_TEST_NVMF=1 00:01:28.598 SPDK_TEST_NVME_CLI=1 00:01:28.598 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:28.598 SPDK_TEST_NVMF_NICS=e810 00:01:28.598 SPDK_TEST_VFIOUSER=1 00:01:28.598 SPDK_RUN_UBSAN=1 00:01:28.598 NET_TYPE=phy 00:01:28.598 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:28.598 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:28.598 RUN_NIGHTLY=1 19:49:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:28.598 19:49:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:28.598 19:49:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:28.598 19:49:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:28.598 19:49:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.598 19:49:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.598 19:49:16 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.598 19:49:16 -- paths/export.sh@5 -- $ export PATH 00:01:28.598 19:49:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:28.598 19:49:16 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:28.598 19:49:16 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:28.598 19:49:16 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720892956.XXXXXX 00:01:28.598 19:49:16 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720892956.DpQfJy 00:01:28.598 19:49:16 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:28.598 19:49:16 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']' 00:01:28.598 19:49:16 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:28.598 19:49:16 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:28.598 19:49:16 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:28.598 19:49:16 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:28.598 19:49:16 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:28.598 19:49:16 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:28.598 19:49:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.598 19:49:16 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:28.598 19:49:16 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:28.598 19:49:16 -- pm/common@17 -- $ local monitor 00:01:28.598 19:49:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.598 19:49:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.598 19:49:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.598 19:49:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:28.598 19:49:16 -- pm/common@21 -- $ date +%s 00:01:28.598 19:49:16 -- pm/common@21 -- $ date +%s 00:01:28.598 19:49:16 -- pm/common@25 -- $ sleep 1 00:01:28.598 19:49:16 -- pm/common@21 -- $ date +%s 00:01:28.598 19:49:16 -- pm/common@21 -- $ date +%s 00:01:28.598 19:49:16 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720892956 00:01:28.598 19:49:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720892956 00:01:28.598 19:49:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720892956 00:01:28.598 19:49:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720892956 00:01:28.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720892956_collect-vmstat.pm.log 00:01:28.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720892956_collect-cpu-temp.pm.log 00:01:28.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720892956_collect-cpu-load.pm.log 00:01:28.598 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720892956_collect-bmc-pm.bmc.pm.log 00:01:29.536 19:49:17 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:29.536 19:49:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:29.536 19:49:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:29.536 19:49:17 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:29.536 19:49:17 -- spdk/autobuild.sh@16 -- $ date -u 00:01:29.536 Sat Jul 13 05:49:17 PM UTC 2024 00:01:29.536 19:49:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:29.536 v24.05-13-g5fa2f5086 00:01:29.536 19:49:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:29.536 19:49:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:29.536 19:49:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:29.536 19:49:17 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:29.536 19:49:17 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:29.536 19:49:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.536 ************************************ 00:01:29.536 START TEST ubsan 00:01:29.536 ************************************ 00:01:29.536 19:49:17 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:29.536 using ubsan 00:01:29.536 00:01:29.536 real 0m0.000s 00:01:29.536 user 0m0.000s 00:01:29.536 sys 0m0.000s 00:01:29.536 19:49:17 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:29.536 19:49:17 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:29.536 ************************************ 00:01:29.536 END TEST ubsan 00:01:29.536 ************************************ 00:01:29.536 19:49:17 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:29.536 19:49:17 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:29.536 19:49:17 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:29.536 19:49:17 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:29.536 19:49:17 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:29.536 19:49:17 -- common/autotest_common.sh@10 -- $ set +x 
00:01:29.536 ************************************ 00:01:29.536 START TEST build_native_dpdk 00:01:29.536 ************************************ 00:01:29.536 19:49:17 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:29.536 caf0f5d395 version: 22.11.4 00:01:29.536 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:29.536 dc9c799c7d vhost: fix missing spinlock unlock 00:01:29.536 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:29.536 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:29.536 
19:49:17 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:29.536 19:49:17 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:29.536 patching file config/rte_config.h 00:01:29.536 Hunk #1 succeeded at 60 (offset 1 line). 00:01:29.536 19:49:17 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:29.537 19:49:17 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:29.537 19:49:17 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:29.537 19:49:17 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:29.537 19:49:17 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:33.724 The Meson build system 00:01:33.724 Version: 1.3.1 00:01:33.724 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:33.724 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:33.724 Build type: native build 00:01:33.724 Program cat found: YES (/usr/bin/cat) 00:01:33.724 Project name: DPDK 00:01:33.724 Project version: 22.11.4 00:01:33.724 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:33.724 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:33.724 Host machine cpu family: x86_64 00:01:33.724 Host machine cpu: x86_64 00:01:33.724 Message: ## Building in Developer Mode ## 00:01:33.724 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:33.724 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:33.724 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:33.724 Program objdump found: YES (/usr/bin/objdump) 00:01:33.724 Program python3 found: YES (/usr/bin/python3) 00:01:33.724 Program cat found: YES (/usr/bin/cat) 00:01:33.724 config/meson.build:83: WARNING: The "machine" option is 
deprecated. Please use "cpu_instruction_set" instead. 00:01:33.724 Checking for size of "void *" : 8 00:01:33.724 Checking for size of "void *" : 8 (cached) 00:01:33.724 Library m found: YES 00:01:33.724 Library numa found: YES 00:01:33.724 Has header "numaif.h" : YES 00:01:33.724 Library fdt found: NO 00:01:33.724 Library execinfo found: NO 00:01:33.724 Has header "execinfo.h" : YES 00:01:33.724 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:33.724 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:33.724 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:33.724 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:33.724 Run-time dependency openssl found: YES 3.0.9 00:01:33.724 Run-time dependency libpcap found: YES 1.10.4 00:01:33.724 Has header "pcap.h" with dependency libpcap: YES 00:01:33.724 Compiler for C supports arguments -Wcast-qual: YES 00:01:33.724 Compiler for C supports arguments -Wdeprecated: YES 00:01:33.724 Compiler for C supports arguments -Wformat: YES 00:01:33.724 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:33.724 Compiler for C supports arguments -Wformat-security: NO 00:01:33.724 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:33.724 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:33.724 Compiler for C supports arguments -Wnested-externs: YES 00:01:33.724 Compiler for C supports arguments -Wold-style-definition: YES 00:01:33.724 Compiler for C supports arguments -Wpointer-arith: YES 00:01:33.724 Compiler for C supports arguments -Wsign-compare: YES 00:01:33.724 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:33.724 Compiler for C supports arguments -Wundef: YES 00:01:33.724 Compiler for C supports arguments -Wwrite-strings: YES 00:01:33.724 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:33.724 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:33.724 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:33.724 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:33.724 Compiler for C supports arguments -mavx512f: YES 00:01:33.724 Checking if "AVX512 checking" compiles: YES 00:01:33.724 Fetching value of define "__SSE4_2__" : 1 00:01:33.724 Fetching value of define "__AES__" : 1 00:01:33.724 Fetching value of define "__AVX__" : 1 00:01:33.724 Fetching value of define "__AVX2__" : (undefined) 00:01:33.724 Fetching value of define "__AVX512BW__" : (undefined) 00:01:33.724 Fetching value of define "__AVX512CD__" : (undefined) 00:01:33.724 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:33.724 Fetching value of define "__AVX512F__" : (undefined) 00:01:33.724 Fetching value of define "__AVX512VL__" : (undefined) 00:01:33.724 Fetching value of define "__PCLMUL__" : 1 00:01:33.724 Fetching value of define "__RDRND__" : 1 00:01:33.724 Fetching value of define "__RDSEED__" : (undefined) 00:01:33.724 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:33.724 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:33.724 Message: lib/kvargs: Defining dependency "kvargs" 00:01:33.724 Message: lib/telemetry: Defining dependency "telemetry" 00:01:33.724 Checking for function "getentropy" : YES 00:01:33.724 Message: lib/eal: Defining dependency "eal" 00:01:33.724 Message: lib/ring: Defining dependency "ring" 00:01:33.724 Message: lib/rcu: Defining dependency "rcu" 00:01:33.724 Message: lib/mempool: Defining dependency "mempool" 00:01:33.724 Message: 
lib/mbuf: Defining dependency "mbuf" 00:01:33.724 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:33.724 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:33.724 Compiler for C supports arguments -mpclmul: YES 00:01:33.724 Compiler for C supports arguments -maes: YES 00:01:33.724 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:33.724 Compiler for C supports arguments -mavx512bw: YES 00:01:33.724 Compiler for C supports arguments -mavx512dq: YES 00:01:33.724 Compiler for C supports arguments -mavx512vl: YES 00:01:33.724 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:33.724 Compiler for C supports arguments -mavx2: YES 00:01:33.724 Compiler for C supports arguments -mavx: YES 00:01:33.724 Message: lib/net: Defining dependency "net" 00:01:33.724 Message: lib/meter: Defining dependency "meter" 00:01:33.724 Message: lib/ethdev: Defining dependency "ethdev" 00:01:33.724 Message: lib/pci: Defining dependency "pci" 00:01:33.724 Message: lib/cmdline: Defining dependency "cmdline" 00:01:33.724 Message: lib/metrics: Defining dependency "metrics" 00:01:33.724 Message: lib/hash: Defining dependency "hash" 00:01:33.724 Message: lib/timer: Defining dependency "timer" 00:01:33.724 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:33.724 Compiler for C supports arguments -mavx2: YES (cached) 00:01:33.724 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:33.724 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:33.724 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:33.724 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:33.724 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:33.724 Message: lib/acl: Defining dependency "acl" 00:01:33.724 Message: lib/bbdev: Defining dependency "bbdev" 00:01:33.724 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:33.724 Run-time dependency libelf found: YES 0.190 00:01:33.724 Message: lib/bpf: Defining dependency "bpf" 00:01:33.724 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:33.724 Message: lib/compressdev: Defining dependency "compressdev" 00:01:33.724 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:33.724 Message: lib/distributor: Defining dependency "distributor" 00:01:33.724 Message: lib/efd: Defining dependency "efd" 00:01:33.724 Message: lib/eventdev: Defining dependency "eventdev" 00:01:33.724 Message: lib/gpudev: Defining dependency "gpudev" 00:01:33.724 Message: lib/gro: Defining dependency "gro" 00:01:33.724 Message: lib/gso: Defining dependency "gso" 00:01:33.724 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:33.724 Message: lib/jobstats: Defining dependency "jobstats" 00:01:33.724 Message: lib/latencystats: Defining dependency "latencystats" 00:01:33.724 Message: lib/lpm: Defining dependency "lpm" 00:01:33.724 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:33.724 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:33.724 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:33.724 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:33.724 Message: lib/member: Defining dependency "member" 00:01:33.724 Message: lib/pcapng: Defining dependency "pcapng" 00:01:33.725 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:33.725 Message: lib/power: Defining dependency "power" 00:01:33.725 Message: lib/rawdev: Defining dependency "rawdev" 00:01:33.725 
Message: lib/regexdev: Defining dependency "regexdev" 00:01:33.725 Message: lib/dmadev: Defining dependency "dmadev" 00:01:33.725 Message: lib/rib: Defining dependency "rib" 00:01:33.725 Message: lib/reorder: Defining dependency "reorder" 00:01:33.725 Message: lib/sched: Defining dependency "sched" 00:01:33.725 Message: lib/security: Defining dependency "security" 00:01:33.725 Message: lib/stack: Defining dependency "stack" 00:01:33.725 Has header "linux/userfaultfd.h" : YES 00:01:33.725 Message: lib/vhost: Defining dependency "vhost" 00:01:33.725 Message: lib/ipsec: Defining dependency "ipsec" 00:01:33.725 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:33.725 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:33.725 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:33.725 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:33.725 Message: lib/fib: Defining dependency "fib" 00:01:33.725 Message: lib/port: Defining dependency "port" 00:01:33.725 Message: lib/pdump: Defining dependency "pdump" 00:01:33.725 Message: lib/table: Defining dependency "table" 00:01:33.725 Message: lib/pipeline: Defining dependency "pipeline" 00:01:33.725 Message: lib/graph: Defining dependency "graph" 00:01:33.725 Message: lib/node: Defining dependency "node" 00:01:33.725 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:33.725 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:33.725 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:33.725 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:33.725 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:33.725 Compiler for C supports arguments -Wno-unused-value: YES 00:01:35.105 Compiler for C supports arguments -Wno-format: YES 00:01:35.105 Compiler for C supports arguments -Wno-format-security: YES 00:01:35.105 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:35.105 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:35.105 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:35.105 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:35.105 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:35.105 Compiler for C supports arguments -mavx2: YES (cached) 00:01:35.105 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:35.105 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:35.105 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:35.105 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:35.106 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:35.106 Program doxygen found: YES (/usr/bin/doxygen) 00:01:35.106 Configuring doxy-api.conf using configuration 00:01:35.106 Program sphinx-build found: NO 00:01:35.106 Configuring rte_build_config.h using configuration 00:01:35.106 Message: 00:01:35.106 ================= 00:01:35.106 Applications Enabled 00:01:35.106 ================= 00:01:35.106 00:01:35.106 apps: 00:01:35.106 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:35.106 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:35.106 test-security-perf, 00:01:35.106 00:01:35.106 Message: 00:01:35.106 ================= 00:01:35.106 Libraries Enabled 00:01:35.106 ================= 00:01:35.106 00:01:35.106 libs: 00:01:35.106 kvargs, telemetry, eal, ring, rcu, 
mempool, mbuf, net, 00:01:35.106 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:35.106 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:35.106 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:35.106 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:35.106 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:35.106 table, pipeline, graph, node, 00:01:35.106 00:01:35.106 Message: 00:01:35.106 =============== 00:01:35.106 Drivers Enabled 00:01:35.106 =============== 00:01:35.106 00:01:35.106 common: 00:01:35.106 00:01:35.106 bus: 00:01:35.106 pci, vdev, 00:01:35.106 mempool: 00:01:35.106 ring, 00:01:35.106 dma: 00:01:35.106 00:01:35.106 net: 00:01:35.106 i40e, 00:01:35.106 raw: 00:01:35.106 00:01:35.106 crypto: 00:01:35.106 00:01:35.106 compress: 00:01:35.106 00:01:35.106 regex: 00:01:35.106 00:01:35.106 vdpa: 00:01:35.106 00:01:35.106 event: 00:01:35.106 00:01:35.106 baseband: 00:01:35.106 00:01:35.106 gpu: 00:01:35.106 00:01:35.106 00:01:35.106 Message: 00:01:35.106 ================= 00:01:35.106 Content Skipped 00:01:35.106 ================= 00:01:35.106 00:01:35.106 apps: 00:01:35.106 00:01:35.106 libs: 00:01:35.106 kni: explicitly disabled via build config (deprecated lib) 00:01:35.106 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:35.106 00:01:35.106 drivers: 00:01:35.106 common/cpt: not in enabled drivers build config 00:01:35.106 common/dpaax: not in enabled drivers build config 00:01:35.106 common/iavf: not in enabled drivers build config 00:01:35.106 common/idpf: not in enabled drivers build config 00:01:35.106 common/mvep: not in enabled drivers build config 00:01:35.106 common/octeontx: not in enabled drivers build config 00:01:35.106 bus/auxiliary: not in enabled drivers build config 00:01:35.106 bus/dpaa: not in enabled drivers build config 00:01:35.106 bus/fslmc: not in enabled drivers build config 00:01:35.106 bus/ifpga: not in enabled drivers build config 00:01:35.106 bus/vmbus: not in enabled drivers build config 00:01:35.106 common/cnxk: not in enabled drivers build config 00:01:35.106 common/mlx5: not in enabled drivers build config 00:01:35.106 common/qat: not in enabled drivers build config 00:01:35.106 common/sfc_efx: not in enabled drivers build config 00:01:35.106 mempool/bucket: not in enabled drivers build config 00:01:35.106 mempool/cnxk: not in enabled drivers build config 00:01:35.106 mempool/dpaa: not in enabled drivers build config 00:01:35.106 mempool/dpaa2: not in enabled drivers build config 00:01:35.106 mempool/octeontx: not in enabled drivers build config 00:01:35.106 mempool/stack: not in enabled drivers build config 00:01:35.106 dma/cnxk: not in enabled drivers build config 00:01:35.106 dma/dpaa: not in enabled drivers build config 00:01:35.106 dma/dpaa2: not in enabled drivers build config 00:01:35.106 dma/hisilicon: not in enabled drivers build config 00:01:35.106 dma/idxd: not in enabled drivers build config 00:01:35.106 dma/ioat: not in enabled drivers build config 00:01:35.106 dma/skeleton: not in enabled drivers build config 00:01:35.106 net/af_packet: not in enabled drivers build config 00:01:35.106 net/af_xdp: not in enabled drivers build config 00:01:35.106 net/ark: not in enabled drivers build config 00:01:35.106 net/atlantic: not in enabled drivers build config 00:01:35.106 net/avp: not in enabled drivers build config 00:01:35.106 net/axgbe: not in enabled drivers build config 00:01:35.106 net/bnx2x: not in enabled 
drivers build config 00:01:35.106 net/bnxt: not in enabled drivers build config 00:01:35.106 net/bonding: not in enabled drivers build config 00:01:35.106 net/cnxk: not in enabled drivers build config 00:01:35.106 net/cxgbe: not in enabled drivers build config 00:01:35.106 net/dpaa: not in enabled drivers build config 00:01:35.106 net/dpaa2: not in enabled drivers build config 00:01:35.106 net/e1000: not in enabled drivers build config 00:01:35.106 net/ena: not in enabled drivers build config 00:01:35.106 net/enetc: not in enabled drivers build config 00:01:35.106 net/enetfec: not in enabled drivers build config 00:01:35.106 net/enic: not in enabled drivers build config 00:01:35.106 net/failsafe: not in enabled drivers build config 00:01:35.106 net/fm10k: not in enabled drivers build config 00:01:35.106 net/gve: not in enabled drivers build config 00:01:35.106 net/hinic: not in enabled drivers build config 00:01:35.106 net/hns3: not in enabled drivers build config 00:01:35.106 net/iavf: not in enabled drivers build config 00:01:35.106 net/ice: not in enabled drivers build config 00:01:35.106 net/idpf: not in enabled drivers build config 00:01:35.106 net/igc: not in enabled drivers build config 00:01:35.106 net/ionic: not in enabled drivers build config 00:01:35.106 net/ipn3ke: not in enabled drivers build config 00:01:35.106 net/ixgbe: not in enabled drivers build config 00:01:35.106 net/kni: not in enabled drivers build config 00:01:35.106 net/liquidio: not in enabled drivers build config 00:01:35.106 net/mana: not in enabled drivers build config 00:01:35.106 net/memif: not in enabled drivers build config 00:01:35.106 net/mlx4: not in enabled drivers build config 00:01:35.106 net/mlx5: not in enabled drivers build config 00:01:35.106 net/mvneta: not in enabled drivers build config 00:01:35.106 net/mvpp2: not in enabled drivers build config 00:01:35.106 net/netvsc: not in enabled drivers build config 00:01:35.106 net/nfb: not in enabled drivers build config 00:01:35.106 net/nfp: not in enabled drivers build config 00:01:35.106 net/ngbe: not in enabled drivers build config 00:01:35.106 net/null: not in enabled drivers build config 00:01:35.106 net/octeontx: not in enabled drivers build config 00:01:35.106 net/octeon_ep: not in enabled drivers build config 00:01:35.106 net/pcap: not in enabled drivers build config 00:01:35.106 net/pfe: not in enabled drivers build config 00:01:35.106 net/qede: not in enabled drivers build config 00:01:35.106 net/ring: not in enabled drivers build config 00:01:35.106 net/sfc: not in enabled drivers build config 00:01:35.106 net/softnic: not in enabled drivers build config 00:01:35.106 net/tap: not in enabled drivers build config 00:01:35.106 net/thunderx: not in enabled drivers build config 00:01:35.106 net/txgbe: not in enabled drivers build config 00:01:35.106 net/vdev_netvsc: not in enabled drivers build config 00:01:35.106 net/vhost: not in enabled drivers build config 00:01:35.106 net/virtio: not in enabled drivers build config 00:01:35.106 net/vmxnet3: not in enabled drivers build config 00:01:35.106 raw/cnxk_bphy: not in enabled drivers build config 00:01:35.106 raw/cnxk_gpio: not in enabled drivers build config 00:01:35.106 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:35.106 raw/ifpga: not in enabled drivers build config 00:01:35.106 raw/ntb: not in enabled drivers build config 00:01:35.106 raw/skeleton: not in enabled drivers build config 00:01:35.106 crypto/armv8: not in enabled drivers build config 00:01:35.106 crypto/bcmfs: not in 
enabled drivers build config 00:01:35.106 crypto/caam_jr: not in enabled drivers build config 00:01:35.106 crypto/ccp: not in enabled drivers build config 00:01:35.106 crypto/cnxk: not in enabled drivers build config 00:01:35.106 crypto/dpaa_sec: not in enabled drivers build config 00:01:35.106 crypto/dpaa2_sec: not in enabled drivers build config 00:01:35.106 crypto/ipsec_mb: not in enabled drivers build config 00:01:35.106 crypto/mlx5: not in enabled drivers build config 00:01:35.106 crypto/mvsam: not in enabled drivers build config 00:01:35.106 crypto/nitrox: not in enabled drivers build config 00:01:35.106 crypto/null: not in enabled drivers build config 00:01:35.106 crypto/octeontx: not in enabled drivers build config 00:01:35.106 crypto/openssl: not in enabled drivers build config 00:01:35.106 crypto/scheduler: not in enabled drivers build config 00:01:35.106 crypto/uadk: not in enabled drivers build config 00:01:35.106 crypto/virtio: not in enabled drivers build config 00:01:35.106 compress/isal: not in enabled drivers build config 00:01:35.106 compress/mlx5: not in enabled drivers build config 00:01:35.106 compress/octeontx: not in enabled drivers build config 00:01:35.106 compress/zlib: not in enabled drivers build config 00:01:35.106 regex/mlx5: not in enabled drivers build config 00:01:35.106 regex/cn9k: not in enabled drivers build config 00:01:35.106 vdpa/ifc: not in enabled drivers build config 00:01:35.106 vdpa/mlx5: not in enabled drivers build config 00:01:35.106 vdpa/sfc: not in enabled drivers build config 00:01:35.106 event/cnxk: not in enabled drivers build config 00:01:35.106 event/dlb2: not in enabled drivers build config 00:01:35.106 event/dpaa: not in enabled drivers build config 00:01:35.106 event/dpaa2: not in enabled drivers build config 00:01:35.106 event/dsw: not in enabled drivers build config 00:01:35.106 event/opdl: not in enabled drivers build config 00:01:35.106 event/skeleton: not in enabled drivers build config 00:01:35.106 event/sw: not in enabled drivers build config 00:01:35.106 event/octeontx: not in enabled drivers build config 00:01:35.106 baseband/acc: not in enabled drivers build config 00:01:35.106 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:35.106 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:35.106 baseband/la12xx: not in enabled drivers build config 00:01:35.106 baseband/null: not in enabled drivers build config 00:01:35.106 baseband/turbo_sw: not in enabled drivers build config 00:01:35.106 gpu/cuda: not in enabled drivers build config 00:01:35.106 00:01:35.106 00:01:35.106 Build targets in project: 316 00:01:35.106 00:01:35.106 DPDK 22.11.4 00:01:35.106 00:01:35.106 User defined options 00:01:35.106 libdir : lib 00:01:35.106 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:35.107 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:35.107 c_link_args : 00:01:35.107 enable_docs : false 00:01:35.107 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:35.107 enable_kmods : false 00:01:35.107 machine : native 00:01:35.107 tests : false 00:01:35.107 00:01:35.107 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:35.107 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:35.107 19:49:22 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:35.107 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:35.107 [1/745] Generating lib/rte_kvargs_def with a custom command 00:01:35.107 [2/745] Generating lib/rte_telemetry_def with a custom command 00:01:35.107 [3/745] Generating lib/rte_telemetry_mingw with a custom command 00:01:35.107 [4/745] Generating lib/rte_kvargs_mingw with a custom command 00:01:35.107 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:35.107 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:35.107 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:35.107 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:35.376 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:35.376 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:35.376 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:35.376 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:35.376 [13/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:35.376 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:35.376 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:35.376 [16/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:35.376 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:35.376 [18/745] Linking static target lib/librte_kvargs.a 00:01:35.376 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:35.376 [20/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:35.376 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:35.376 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:35.376 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:35.376 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:35.376 [25/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:35.376 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:35.376 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:35.376 [28/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:35.376 [29/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:35.376 [30/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:35.376 [31/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:35.376 [32/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:35.376 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:35.376 [34/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:35.376 [35/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:35.376 [36/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:35.376 [37/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:35.376 [38/745] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:35.376 [39/745] Generating lib/rte_eal_def with a custom command 00:01:35.376 [40/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:35.376 [41/745] Generating lib/rte_eal_mingw with a custom command 00:01:35.376 [42/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:35.376 [43/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:35.376 [44/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:35.376 [45/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:35.376 [46/745] Generating lib/rte_ring_def with a custom command 00:01:35.376 [47/745] Generating lib/rte_ring_mingw with a custom command 00:01:35.644 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:35.644 [49/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:35.644 [50/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:35.644 [51/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:35.644 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:35.644 [53/745] Generating lib/rte_rcu_def with a custom command 00:01:35.644 [54/745] Generating lib/rte_rcu_mingw with a custom command 00:01:35.644 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:35.644 [56/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:35.644 [57/745] Generating lib/rte_mempool_def with a custom command 00:01:35.644 [58/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:35.644 [59/745] Generating lib/rte_mbuf_def with a custom command 00:01:35.644 [60/745] Generating lib/rte_mbuf_mingw with a custom command 00:01:35.644 [61/745] Generating lib/rte_mempool_mingw with a custom command 00:01:35.644 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:35.644 [63/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:35.644 [64/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:35.644 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:35.644 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:35.644 [67/745] Generating lib/rte_net_def with a custom command 00:01:35.644 [68/745] Generating lib/rte_net_mingw with a custom command 00:01:35.644 [69/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:35.644 [70/745] Generating lib/rte_meter_def with a custom command 00:01:35.644 [71/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:35.644 [72/745] Generating lib/rte_meter_mingw with a custom command 00:01:35.644 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:35.644 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:35.644 [75/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:35.644 [76/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:35.644 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:35.644 [78/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.644 [79/745] Generating lib/rte_ethdev_def with a custom command 00:01:35.644 [80/745] Compiling C object 
lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:35.904 [81/745] Linking static target lib/librte_ring.a 00:01:35.904 [82/745] Linking target lib/librte_kvargs.so.23.0 00:01:35.904 [83/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:35.904 [84/745] Generating lib/rte_ethdev_mingw with a custom command 00:01:35.904 [85/745] Generating lib/rte_pci_def with a custom command 00:01:35.904 [86/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:35.904 [87/745] Linking static target lib/librte_meter.a 00:01:35.904 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:35.904 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:35.904 [90/745] Generating lib/rte_pci_mingw with a custom command 00:01:35.904 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:35.904 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:35.904 [93/745] Linking static target lib/librte_pci.a 00:01:35.904 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:35.904 [95/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:36.162 [96/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:36.162 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:36.163 [98/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:36.163 [99/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:36.163 [100/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.163 [101/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.163 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:36.163 [103/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:36.163 [104/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:36.163 [105/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:36.163 [106/745] Linking static target lib/librte_telemetry.a 00:01:36.163 [107/745] Generating lib/rte_cmdline_def with a custom command 00:01:36.163 [108/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:36.163 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:36.163 [110/745] Generating lib/rte_cmdline_mingw with a custom command 00:01:36.424 [111/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.424 [112/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:36.424 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:36.424 [114/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:36.424 [115/745] Generating lib/rte_metrics_def with a custom command 00:01:36.424 [116/745] Generating lib/rte_metrics_mingw with a custom command 00:01:36.424 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:36.424 [118/745] Generating lib/rte_hash_def with a custom command 00:01:36.424 [119/745] Generating lib/rte_hash_mingw with a custom command 00:01:36.424 [120/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:36.424 [121/745] Generating lib/rte_timer_def with a custom command 00:01:36.424 [122/745] Generating 
lib/rte_timer_mingw with a custom command 00:01:36.686 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:36.686 [124/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:36.686 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:36.686 [126/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:36.686 [127/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:36.686 [128/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:36.686 [129/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:36.686 [130/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:36.686 [131/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:36.686 [132/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:36.686 [133/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:36.686 [134/745] Generating lib/rte_acl_def with a custom command 00:01:36.686 [135/745] Generating lib/rte_acl_mingw with a custom command 00:01:36.686 [136/745] Generating lib/rte_bbdev_def with a custom command 00:01:36.686 [137/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:36.686 [138/745] Generating lib/rte_bitratestats_def with a custom command 00:01:36.686 [139/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:36.946 [140/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:36.946 [141/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.946 [142/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:36.946 [143/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:36.946 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:36.946 [145/745] Linking target lib/librte_telemetry.so.23.0 00:01:36.946 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:36.946 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:36.946 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:36.946 [149/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:36.946 [150/745] Generating lib/rte_bpf_mingw with a custom command 00:01:36.946 [151/745] Generating lib/rte_bpf_def with a custom command 00:01:36.946 [152/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:36.946 [153/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:36.947 [154/745] Generating lib/rte_cfgfile_def with a custom command 00:01:36.947 [155/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:36.947 [156/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:36.947 [157/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:37.208 [158/745] Generating lib/rte_compressdev_def with a custom command 00:01:37.208 [159/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:37.208 [160/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:37.208 [161/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:37.208 [162/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:37.208 [163/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 
00:01:37.208 [164/745] Generating lib/rte_cryptodev_def with a custom command 00:01:37.208 [165/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:37.208 [166/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:37.208 [167/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:37.208 [168/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:37.208 [169/745] Linking static target lib/librte_rcu.a 00:01:37.208 [170/745] Linking static target lib/librte_cmdline.a 00:01:37.208 [171/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:37.208 [172/745] Linking static target lib/librte_timer.a 00:01:37.208 [173/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:37.208 [174/745] Generating lib/rte_distributor_def with a custom command 00:01:37.208 [175/745] Generating lib/rte_distributor_mingw with a custom command 00:01:37.208 [176/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:37.208 [177/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:37.208 [178/745] Generating lib/rte_efd_def with a custom command 00:01:37.471 [179/745] Linking static target lib/librte_net.a 00:01:37.471 [180/745] Generating lib/rte_efd_mingw with a custom command 00:01:37.471 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:37.471 [182/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:37.471 [183/745] Linking static target lib/librte_metrics.a 00:01:37.471 [184/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:37.471 [185/745] Linking static target lib/librte_cfgfile.a 00:01:37.471 [186/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:37.732 [187/745] Linking static target lib/librte_mempool.a 00:01:37.732 [188/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.732 [189/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.732 [190/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:37.732 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.732 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:37.732 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:38.004 [194/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:38.004 [195/745] Generating lib/rte_eventdev_def with a custom command 00:01:38.004 [196/745] Linking static target lib/librte_eal.a 00:01:38.004 [197/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:38.004 [198/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:38.004 [199/745] Generating lib/rte_gpudev_def with a custom command 00:01:38.004 [200/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:38.004 [201/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:38.004 [202/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:38.004 [203/745] Linking static target lib/librte_bitratestats.a 00:01:38.004 [204/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:38.004 [205/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:38.004 [206/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:38.004 [207/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson 
to capture output) 00:01:38.004 [208/745] Generating lib/rte_gro_def with a custom command 00:01:38.265 [209/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:38.265 [210/745] Generating lib/rte_gro_mingw with a custom command 00:01:38.265 [211/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.265 [212/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:38.265 [213/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:38.265 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:38.533 [215/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:38.533 [216/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.533 [217/745] Generating lib/rte_gso_def with a custom command 00:01:38.533 [218/745] Generating lib/rte_gso_mingw with a custom command 00:01:38.533 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:38.533 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:38.533 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:38.533 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:38.796 [223/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.796 [224/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:38.796 [225/745] Generating lib/rte_ip_frag_def with a custom command 00:01:38.796 [226/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:38.796 [227/745] Linking static target lib/librte_bbdev.a 00:01:38.796 [228/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:38.796 [229/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:38.796 [230/745] Generating lib/rte_jobstats_def with a custom command 00:01:38.796 [231/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:38.796 [232/745] Generating lib/rte_latencystats_def with a custom command 00:01:38.796 [233/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:38.796 [234/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.796 [235/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:38.796 [236/745] Generating lib/rte_lpm_def with a custom command 00:01:38.796 [237/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:38.796 [238/745] Linking static target lib/librte_compressdev.a 00:01:38.796 [239/745] Generating lib/rte_lpm_mingw with a custom command 00:01:39.059 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:39.059 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:39.059 [242/745] Linking static target lib/librte_jobstats.a 00:01:39.059 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:39.325 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:39.325 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:39.325 [246/745] Linking static target lib/librte_distributor.a 00:01:39.325 [247/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:39.325 [248/745] Generating 
lib/rte_member_def with a custom command 00:01:39.325 [249/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:39.325 [250/745] Generating lib/rte_member_mingw with a custom command 00:01:39.325 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:39.584 [252/745] Generating lib/rte_pcapng_def with a custom command 00:01:39.584 [253/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:39.584 [254/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.584 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:39.584 [256/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:39.584 [257/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.584 [258/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:39.584 [259/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:39.584 [260/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:39.584 [261/745] Linking static target lib/librte_bpf.a 00:01:39.584 [262/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:39.584 [263/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:39.584 [264/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:39.584 [265/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.584 [266/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:39.584 [267/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:39.584 [268/745] Linking static target lib/librte_gpudev.a 00:01:39.584 [269/745] Generating lib/rte_power_def with a custom command 00:01:39.850 [270/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:39.850 [271/745] Generating lib/rte_power_mingw with a custom command 00:01:39.850 [272/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:39.850 [273/745] Generating lib/rte_rawdev_def with a custom command 00:01:39.850 [274/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:39.850 [275/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:39.850 [276/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:39.850 [277/745] Generating lib/rte_regexdev_def with a custom command 00:01:39.850 [278/745] Linking static target lib/librte_gro.a 00:01:39.850 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:39.850 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:39.850 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:39.850 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:39.850 [283/745] Generating lib/rte_rib_def with a custom command 00:01:39.850 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:40.109 [285/745] Generating lib/rte_reorder_def with a custom command 00:01:40.109 [286/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:40.109 [287/745] Generating lib/rte_reorder_mingw with a custom command 00:01:40.109 [288/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:40.109 [289/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.109 [290/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:40.109 [291/745] Generating lib/gro.sym_chk with a 
custom command (wrapped by meson to capture output) 00:01:40.109 [292/745] Generating lib/rte_sched_def with a custom command 00:01:40.109 [293/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:40.109 [294/745] Generating lib/rte_sched_mingw with a custom command 00:01:40.371 [295/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:40.371 [296/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:40.371 [297/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:40.371 [298/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:40.371 [299/745] Generating lib/rte_security_mingw with a custom command 00:01:40.371 [300/745] Generating lib/rte_security_def with a custom command 00:01:40.371 [301/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.371 [302/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:40.371 [303/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:40.371 [304/745] Linking static target lib/librte_latencystats.a 00:01:40.371 [305/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:40.371 [306/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:40.371 [307/745] Generating lib/rte_stack_def with a custom command 00:01:40.371 [308/745] Generating lib/rte_stack_mingw with a custom command 00:01:40.371 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:40.371 [310/745] Linking static target lib/librte_rawdev.a 00:01:40.371 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:40.371 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:40.371 [313/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:40.371 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:40.371 [315/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:40.371 [316/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:40.371 [317/745] Linking static target lib/librte_stack.a 00:01:40.632 [318/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:40.632 [319/745] Generating lib/rte_vhost_mingw with a custom command 00:01:40.632 [320/745] Generating lib/rte_vhost_def with a custom command 00:01:40.632 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:40.632 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:40.632 [323/745] Linking static target lib/librte_dmadev.a 00:01:40.632 [324/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:40.632 [325/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:40.632 [326/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.632 [327/745] Linking static target lib/librte_ip_frag.a 00:01:40.632 [328/745] Generating lib/rte_ipsec_def with a custom command 00:01:40.632 [329/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:40.894 [330/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:40.894 [331/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.894 [332/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 
00:01:40.894 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:41.156 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.156 [335/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.156 [336/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:41.157 [337/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.157 [338/745] Generating lib/rte_fib_def with a custom command 00:01:41.157 [339/745] Generating lib/rte_fib_mingw with a custom command 00:01:41.157 [340/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:41.157 [341/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:41.420 [342/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:41.420 [343/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:41.420 [344/745] Linking static target lib/librte_regexdev.a 00:01:41.420 [345/745] Linking static target lib/librte_gso.a 00:01:41.420 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.705 [347/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:41.705 [348/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:41.705 [349/745] Linking static target lib/librte_pcapng.a 00:01:41.705 [350/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.705 [351/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:41.705 [352/745] Linking static target lib/librte_efd.a 00:01:41.705 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:41.705 [354/745] Linking static target lib/librte_lpm.a 00:01:41.705 [355/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:41.705 [356/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:41.705 [357/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:42.001 [358/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:42.001 [359/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:42.001 [360/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:42.001 [361/745] Linking static target lib/librte_reorder.a 00:01:42.001 [362/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:42.001 [363/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:42.001 [364/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.001 [365/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:42.001 [366/745] Generating lib/rte_port_def with a custom command 00:01:42.001 [367/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:42.001 [368/745] Linking static target lib/acl/libavx2_tmp.a 00:01:42.001 [369/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.001 [370/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:42.001 [371/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:42.001 [372/745] Generating lib/rte_port_mingw with a custom command 00:01:42.001 [373/745] Generating lib/rte_pdump_def with a custom command 00:01:42.269 [374/745] Generating lib/rte_pdump_mingw with a custom command 00:01:42.269 [375/745] 
Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:42.269 [376/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:42.269 [377/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:42.269 [378/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:42.269 [379/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:42.269 [380/745] Linking static target lib/librte_security.a 00:01:42.269 [381/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:42.269 [382/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.269 [383/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.269 [384/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:42.269 [385/745] Linking static target lib/librte_power.a 00:01:42.537 [386/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:42.537 [387/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:42.537 [388/745] Linking static target lib/librte_hash.a 00:01:42.537 [389/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.537 [390/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:42.537 [391/745] Linking static target lib/librte_rib.a 00:01:42.537 [392/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:42.537 [393/745] Linking static target lib/acl/libavx512_tmp.a 00:01:42.537 [394/745] Linking static target lib/librte_acl.a 00:01:42.537 [395/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:42.802 [396/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:42.802 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:42.802 [398/745] Generating lib/rte_table_def with a custom command 00:01:42.802 [399/745] Generating lib/rte_table_mingw with a custom command 00:01:43.063 [400/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:43.063 [401/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.063 [402/745] Linking static target lib/librte_ethdev.a 00:01:43.064 [403/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.324 [404/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:43.324 [405/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:43.324 [406/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.324 [407/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:43.324 [408/745] Linking static target lib/librte_mbuf.a 00:01:43.324 [409/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:43.324 [410/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:43.584 [411/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:43.584 [412/745] Generating lib/rte_pipeline_def with a custom command 00:01:43.584 [413/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:43.584 [414/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:43.584 [415/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:43.584 [416/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 
00:01:43.584 [417/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:43.584 [418/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.584 [419/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:43.584 [420/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:43.584 [421/745] Generating lib/rte_graph_mingw with a custom command 00:01:43.584 [422/745] Generating lib/rte_graph_def with a custom command 00:01:43.584 [423/745] Linking static target lib/librte_fib.a 00:01:43.584 [424/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:43.847 [425/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:43.847 [426/745] Linking static target lib/librte_eventdev.a 00:01:43.847 [427/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:43.847 [428/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:43.847 [429/745] Linking static target lib/librte_member.a 00:01:43.847 [430/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.847 [431/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:43.847 [432/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:43.847 [433/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:43.847 [434/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:43.847 [435/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:43.847 [436/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:44.109 [437/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:44.109 [438/745] Generating lib/rte_node_def with a custom command 00:01:44.109 [439/745] Generating lib/rte_node_mingw with a custom command 00:01:44.109 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.109 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:44.109 [442/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.109 [443/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:44.110 [444/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:44.110 [445/745] Linking static target lib/librte_sched.a 00:01:44.374 [446/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:44.374 [447/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:44.374 [448/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:44.374 [449/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:44.374 [450/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:44.374 [451/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:44.374 [452/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:44.374 [453/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.374 [454/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:44.374 [455/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:44.374 [456/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:44.641 [457/745] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:44.641 [458/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:44.641 [459/745] Linking static target lib/librte_cryptodev.a 00:01:44.641 [460/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:44.641 [461/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:44.641 [462/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:44.641 [463/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:44.641 [464/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:44.641 [465/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:44.641 [466/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:44.641 [467/745] Linking static target lib/librte_pdump.a 00:01:44.641 [468/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:44.901 [469/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:44.901 [470/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:44.901 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:44.901 [472/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:44.901 [473/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:44.901 [474/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:44.901 [475/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:44.901 [476/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:44.901 [477/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.901 [478/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:45.164 [479/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:45.164 [480/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:45.164 [481/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:45.164 [482/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:45.164 [483/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:45.164 [484/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.164 [485/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.164 [486/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:45.164 [487/745] Linking static target drivers/librte_bus_vdev.a 00:01:45.164 [488/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:45.164 [489/745] Linking static target lib/librte_table.a 00:01:45.428 [490/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:45.428 [491/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:45.428 [492/745] Linking static target lib/librte_ipsec.a 00:01:45.428 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:45.428 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:45.692 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.692 [496/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:45.692 [497/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 
00:01:45.692 [498/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:45.692 [499/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:45.951 [500/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:45.951 [501/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:45.951 [502/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:45.951 [503/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:45.951 [504/745] Linking static target lib/librte_graph.a 00:01:45.951 [505/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.951 [506/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.951 [507/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:45.951 [508/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:45.951 [509/745] Linking static target drivers/librte_bus_pci.a 00:01:45.951 [510/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:45.951 [511/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:46.214 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:46.214 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:46.214 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.471 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:46.471 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.737 [517/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:46.737 [518/745] Linking static target lib/librte_port.a 00:01:46.737 [519/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.737 [520/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:46.737 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:46.995 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:46.995 [523/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:46.995 [524/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:46.995 [525/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:46.995 [526/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:47.257 [527/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:47.257 [528/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:47.257 [529/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:47.257 [530/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.519 [531/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.519 [532/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.519 [533/745] Linking static target drivers/librte_mempool_ring.a 00:01:47.519 [534/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:47.519 [535/745] Compiling C 
object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:47.519 [536/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:47.519 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:47.519 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:47.519 [539/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.784 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:47.784 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.049 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:48.307 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:48.307 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:48.307 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:48.307 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:48.570 [547/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:48.570 [548/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:48.570 [549/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:48.570 [550/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:48.570 [551/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:48.836 [552/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:48.836 [553/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:48.836 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:48.836 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:49.099 [556/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:49.099 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:49.360 [558/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:49.360 [559/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:49.620 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:49.620 [561/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:49.620 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:49.620 [563/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:49.620 [564/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:49.883 [565/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:49.883 [566/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:49.883 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:49.883 [568/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:49.883 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:49.883 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:50.148 [571/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 
00:01:50.148 [572/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:50.148 [573/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:50.148 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:50.411 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:50.411 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:50.411 [577/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.411 [578/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:50.412 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:50.412 [580/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:50.412 [581/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:50.412 [582/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:50.674 [583/745] Linking target lib/librte_eal.so.23.0 00:01:50.674 [584/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:50.674 [585/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:50.674 [586/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:50.674 [587/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.674 [588/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:50.933 [589/745] Linking target lib/librte_ring.so.23.0 00:01:50.933 [590/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:51.198 [591/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:51.198 [592/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:51.198 [593/745] Linking target lib/librte_meter.so.23.0 00:01:51.198 [594/745] Linking target lib/librte_pci.so.23.0 00:01:51.198 [595/745] Linking target lib/librte_rcu.so.23.0 00:01:51.198 [596/745] Linking target lib/librte_mempool.so.23.0 00:01:51.459 [597/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:51.459 [598/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:51.459 [599/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:51.459 [600/745] Linking target lib/librte_timer.so.23.0 00:01:51.459 [601/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:51.459 [602/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:51.459 [603/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:51.459 [604/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:51.459 [605/745] Linking target lib/librte_acl.so.23.0 00:01:51.459 [606/745] Linking target lib/librte_mbuf.so.23.0 00:01:51.459 [607/745] Linking target lib/librte_jobstats.so.23.0 00:01:51.459 [608/745] Linking target lib/librte_cfgfile.so.23.0 00:01:51.459 [609/745] Linking target lib/librte_rawdev.so.23.0 00:01:51.459 [610/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:51.459 [611/745] Generating symbol file 
lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:51.724 [612/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:51.724 [613/745] Linking target lib/librte_dmadev.so.23.0 00:01:51.724 [614/745] Linking target lib/librte_stack.so.23.0 00:01:51.724 [615/745] Linking target lib/librte_rib.so.23.0 00:01:51.724 [616/745] Linking target lib/librte_graph.so.23.0 00:01:51.724 [617/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:51.724 [618/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:51.724 [619/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:51.724 [620/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:51.724 [621/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:51.724 [622/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:51.724 [623/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:51.724 [624/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:51.724 [625/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:51.724 [626/745] Linking target lib/librte_bbdev.so.23.0 00:01:51.987 [627/745] Linking target lib/librte_gpudev.so.23.0 00:01:51.987 [628/745] Linking target lib/librte_net.so.23.0 00:01:51.987 [629/745] Linking target lib/librte_distributor.so.23.0 00:01:51.987 [630/745] Linking target lib/librte_compressdev.so.23.0 00:01:51.987 [631/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:51.987 [632/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:51.987 [633/745] Linking target lib/librte_cryptodev.so.23.0 00:01:51.987 [634/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:51.987 [635/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:51.987 [636/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:51.987 [637/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:51.987 [638/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:51.987 [639/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:51.987 [640/745] Linking target lib/librte_regexdev.so.23.0 00:01:51.987 [641/745] Linking target lib/librte_fib.so.23.0 00:01:51.987 [642/745] Linking target lib/librte_reorder.so.23.0 00:01:51.987 [643/745] Linking target lib/librte_sched.so.23.0 00:01:51.987 [644/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:51.987 [645/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:51.987 [646/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:51.987 [647/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:51.987 [648/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:51.987 [649/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:51.987 [650/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:52.247 [651/745] Linking target lib/librte_hash.so.23.0 00:01:52.247 [652/745] Linking target lib/librte_ethdev.so.23.0 00:01:52.247 [653/745] Linking target lib/librte_cmdline.so.23.0 00:01:52.247 [654/745] Linking target lib/librte_security.so.23.0 00:01:52.247 [655/745] Compiling C 
object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:52.247 [656/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:52.247 [657/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:52.247 [658/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:52.247 [659/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:52.247 [660/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:52.247 [661/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:52.247 [662/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:52.247 [663/745] Linking target lib/librte_metrics.so.23.0 00:01:52.247 [664/745] Linking target lib/librte_pcapng.so.23.0 00:01:52.247 [665/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:52.247 [666/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:52.247 [667/745] Linking target lib/librte_bpf.so.23.0 00:01:52.506 [668/745] Linking target lib/librte_lpm.so.23.0 00:01:52.506 [669/745] Linking target lib/librte_gso.so.23.0 00:01:52.506 [670/745] Linking target lib/librte_gro.so.23.0 00:01:52.506 [671/745] Linking target lib/librte_power.so.23.0 00:01:52.506 [672/745] Linking target lib/librte_efd.so.23.0 00:01:52.506 [673/745] Linking target lib/librte_member.so.23.0 00:01:52.506 [674/745] Linking target lib/librte_ip_frag.so.23.0 00:01:52.506 [675/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:52.506 [676/745] Linking target lib/librte_ipsec.so.23.0 00:01:52.506 [677/745] Linking target lib/librte_eventdev.so.23.0 00:01:52.506 [678/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:52.506 [679/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:52.506 [680/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:52.506 [681/745] Linking target lib/librte_bitratestats.so.23.0 00:01:52.506 [682/745] Linking target lib/librte_latencystats.so.23.0 00:01:52.506 [683/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:52.506 [684/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:52.506 [685/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:52.506 [686/745] Linking target lib/librte_pdump.so.23.0 00:01:52.764 [687/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:52.764 [688/745] Linking target lib/librte_port.so.23.0 00:01:52.764 [689/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:52.764 [690/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:52.764 [691/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:52.764 [692/745] Linking target lib/librte_table.so.23.0 00:01:53.023 [693/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:53.023 [694/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:53.023 [695/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:53.281 [696/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:53.539 [697/745] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:53.539 [698/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:53.539 [699/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:53.797 [700/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:54.055 [701/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:54.055 [702/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:54.055 [703/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:54.313 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:54.313 [705/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:54.313 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:54.313 [707/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:54.571 [708/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:54.571 [709/745] Linking static target drivers/librte_net_i40e.a 00:01:54.571 [710/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:55.137 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.137 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:01:55.700 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:55.700 [714/745] Linking static target lib/librte_node.a 00:01:55.959 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.959 [716/745] Linking target lib/librte_node.so.23.0 00:01:55.959 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:56.558 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:57.493 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:05.601 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:37.661 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:37.661 [722/745] Linking static target lib/librte_vhost.a 00:02:37.661 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.661 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:52.529 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:52.529 [726/745] Linking static target lib/librte_pipeline.a 00:02:52.529 [727/745] Linking target app/dpdk-dumpcap 00:02:52.529 [728/745] Linking target app/dpdk-test-cmdline 00:02:52.529 [729/745] Linking target app/dpdk-test-acl 00:02:52.529 [730/745] Linking target app/dpdk-test-fib 00:02:52.529 [731/745] Linking target app/dpdk-test-regex 00:02:52.529 [732/745] Linking target app/dpdk-test-flow-perf 00:02:52.529 [733/745] Linking target app/dpdk-test-security-perf 00:02:52.529 [734/745] Linking target app/dpdk-pdump 00:02:52.529 [735/745] Linking target app/dpdk-proc-info 00:02:52.529 [736/745] Linking target app/dpdk-test-sad 00:02:52.529 [737/745] Linking target app/dpdk-test-compress-perf 00:02:52.529 [738/745] Linking target app/dpdk-test-gpudev 00:02:52.529 [739/745] Linking target app/dpdk-test-bbdev 00:02:52.529 [740/745] Linking target app/dpdk-test-pipeline 00:02:52.529 [741/745] Linking target app/dpdk-test-eventdev 00:02:52.529 [742/745] Linking target 
app/dpdk-test-crypto-perf 00:02:52.529 [743/745] Linking target app/dpdk-testpmd 00:02:53.903 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.161 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:54.161 19:50:41 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:54.161 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:54.161 [0/1] Installing files. 00:02:54.426 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:54.426 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:54.426 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:54.427 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.427 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.427 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:54.428 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.428 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.429 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.430 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
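The run of entries above and below is DPDK 22.11's staged install step copying each example's sources into build/share/dpdk/examples alongside the compiled artifacts. The invocation itself is not captured in this log; a minimal sketch of how such a staged meson/ninja install is typically driven, with the prefix inferred from the destination paths logged here and the options otherwise assumed, is:

    # Sketch only: options are assumptions, not taken from this log.
    meson setup build --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir=lib -Dexamples=all        # also build the bundled examples
    ninja -C build                         # compile libraries, drivers, apps
    meson install -C build                 # emits the "Installing <src> to <dst>" lines

Installing into a prefix inside the source tree keeps the build self-contained, which is why every destination in this log sits under dpdk/build rather than a system location such as /usr/local.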
00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.430 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:54.431 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:54.431 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.431 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.431 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.431 
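From this point the log moves from example sources to the libraries themselves; each component lands twice, as a static archive (librte_*.a) and as a versioned shared object (librte_*.so.23.0, the ABI version shipped with the 22.11 release). A minimal sketch of a consumer of this staged tree — the hello_dpdk.c file and the PKG_CONFIG_PATH below are assumptions, though DPDK does install a libdpdk pkg-config file under its libdir — is:

    # Sketch only: builds a trivial program against the tree staged above.
    cat > hello_dpdk.c <<'EOF'
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_version.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)    /* parse EAL args, init DPDK */
            return 1;
        printf("%s up\n", rte_version());
        rte_eal_cleanup();                   /* release EAL resources */
        return 0;
    }
    EOF
    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    cc hello_dpdk.c -o hello_dpdk $(pkg-config --cflags --libs libdpdk)

Going through pkg-config is the usual route here: it resolves the include path and the long list of -lrte_* flags so the consumer does not have to track DPDK's per-library naming or ABI suffixes by hand.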
Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.431 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.431 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.431 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.431 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.431 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.431 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.431 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.749 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.749 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.749 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.749 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.749 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.749 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.749 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.749 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_bitratestats.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_pcapng.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:54.750 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.014 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.014 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.014 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.014 Installing lib/librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.014 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.014 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.014 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.015 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.015 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.015 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:55.015 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.015 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:55.015 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.015 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:55.015 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.015 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:55.015 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 
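The header install interleaves two layers: portable fallbacks go under build/include/generic, while the x86 implementations from lib/eal/x86/include are installed flat into build/include, so that #include <rte_atomic.h> resolves to the architecture-specific header, which in turn pulls in its generic counterpart. A quick check against the staged tree — the paths come from the log above, while the grep target is an assumption about the header's contents — is:

    # Sketch only: shows the x86 header layering on the generic fallback.
    inc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
    grep -n 'generic/rte_atomic.h' "$inc/rte_atomic.h"   # x86 variant includes the fallback
    ls "$inc/generic"                                    # the portable definitions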
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.015 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.016 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.017 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.018 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
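Everything above is staged into the workspace-local prefix /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build rather than a system directory. A minimal sketch of compiling a consumer directly against that staged tree (the source file my_app.c and the exact library list are illustrative assumptions, not taken from this log):

# Sketch only: build a hypothetical consumer against the staged DPDK tree.
# STAGE matches the install prefix shown in the lines above.
STAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
cc -I"$STAGE/include" -c my_app.c -o my_app.o
cc my_app.o -L"$STAGE/lib" -lrte_eal -lrte_mbuf -o my_app
LD_LIBRARY_PATH="$STAGE/lib" ./my_app   # resolve the staged .so files at run time
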
00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
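The shared libraries staged alongside these headers each get a three-level SONAME chain, visible in the 'Installing symlink' lines further below (librte_*.so to librte_*.so.23 to librte_*.so.23.0). A hand-rolled equivalent for a single library, assuming the current directory is dpdk/build/lib:

# librte_eal.so.23.0 is the real file; the runtime loader follows the
# .so.23 (SONAME) link, while the link editor follows the bare .so link.
ln -s librte_eal.so.23.0 librte_eal.so.23
ln -s librte_eal.so.23 librte_eal.so
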
00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:55.019 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:55.019 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:55.019 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:55.019 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:55.019 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:55.019 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:55.019 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:55.019 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:55.019 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:55.019 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:55.019 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:55.019 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:55.019 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:55.019 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:55.020 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:55.020 Installing symlink pointing to librte_net.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:55.020 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:55.020 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:55.020 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:55.020 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:55.020 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:55.020 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:55.020 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:55.020 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:55.020 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:55.020 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:55.020 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:55.020 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:55.020 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:55.020 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:55.020 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:55.020 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:55.020 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:55.020 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:55.020 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:55.020 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:55.020 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:55.020 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:55.020 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:55.020 Installing symlink pointing to librte_cfgfile.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:55.020 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:55.020 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:55.020 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:55.020 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:55.020 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:55.020 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:55.020 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:55.020 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:55.020 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:55.020 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:55.020 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:55.020 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:55.020 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:55.020 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:55.020 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:55.020 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:55.020 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:55.020 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:55.020 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:55.020 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:55.020 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:55.020 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:55.020 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:55.020 Installing symlink pointing to 
librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:55.020 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:55.020 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:55.020 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:55.020 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:55.021 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:55.021 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:55.021 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:55.021 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:55.021 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:55.021 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:55.021 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:55.021 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:55.021 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:55.021 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:55.021 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:55.021 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:55.021 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:55.021 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:55.021 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:55.021 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:55.021 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:55.021 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:55.021 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:55.021 Installing symlink pointing to librte_vhost.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:55.021 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:55.021 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:55.021 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:55.021 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:55.021 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:55.021 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:55.021 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:55.021 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:55.021 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:55.021 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:55.021 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:55.021 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:55.021 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:55.021 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:55.021 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:55.021 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:55.021 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:55.021 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:55.021 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:55.021 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:55.021 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:55.021 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:55.021 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:55.021 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:55.021 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:55.021 './librte_bus_vdev.so.23' -> 
'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:55.021 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:55.021 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:55.021 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:55.021 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:55.021 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:55.021 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:55.021 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:55.021 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:55.021 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:55.021 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:55.021 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:55.021 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:55.021 19:50:42 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:55.021 19:50:42 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:55.021 19:50:42 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:55.021 19:50:42 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:55.021 00:02:55.021 real 1m25.478s 00:02:55.021 user 14m24.484s 00:02:55.021 sys 1m48.307s 00:02:55.021 19:50:42 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:55.021 19:50:42 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:55.021 ************************************ 00:02:55.021 END TEST build_native_dpdk 00:02:55.021 ************************************ 00:02:55.021 19:50:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:55.021 19:50:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:55.021 19:50:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:55.021 19:50:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:55.021 19:50:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:55.021 19:50:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:55.021 19:50:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:55.021 19:50:42 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:55.279 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
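The configure step above locates the freshly staged DPDK through the pkg-config files installed earlier (libdpdk.pc and libdpdk-libs.pc under dpdk/build/lib/pkgconfig). A sketch of querying them the same way; these are standard pkg-config invocations, not commands captured in this log:

# Point pkg-config at the staged .pc files, then ask for build flags.
export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
pkg-config --modversion libdpdk   # version string of the staged DPDK
pkg-config --cflags libdpdk       # -I flags into dpdk/build/include
pkg-config --libs libdpdk         # -L/-l flags for the librte_* libraries
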
00:02:55.279 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:55.279 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:55.279 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:55.538 Using 'verbs' RDMA provider 00:03:06.143 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:14.258 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:14.515 Creating mk/config.mk...done. 00:03:14.515 Creating mk/cc.flags.mk...done. 00:03:14.515 Type 'make' to build. 00:03:14.515 19:51:01 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:14.515 19:51:02 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:14.516 19:51:02 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:14.516 19:51:02 -- common/autotest_common.sh@10 -- $ set +x 00:03:14.516 ************************************ 00:03:14.516 START TEST make 00:03:14.516 ************************************ 00:03:14.516 19:51:02 make -- common/autotest_common.sh@1121 -- $ make -j48 00:03:14.774 make[1]: Nothing to be done for 'all'. 00:03:16.167 The Meson build system 00:03:16.167 Version: 1.3.1 00:03:16.167 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:16.167 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:16.167 Build type: native build 00:03:16.167 Project name: libvfio-user 00:03:16.167 Project version: 0.0.1 00:03:16.167 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:16.167 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:16.167 Host machine cpu family: x86_64 00:03:16.167 Host machine cpu: x86_64 00:03:16.167 Run-time dependency threads found: YES 00:03:16.167 Library dl found: YES 00:03:16.167 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:16.167 Run-time dependency json-c found: YES 0.17 00:03:16.167 Run-time dependency cmocka found: YES 1.1.7 00:03:16.167 Program pytest-3 found: NO 00:03:16.167 Program flake8 found: NO 00:03:16.167 Program misspell-fixer found: NO 00:03:16.167 Program restructuredtext-lint found: NO 00:03:16.167 Program valgrind found: YES (/usr/bin/valgrind) 00:03:16.167 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:16.167 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:16.167 Compiler for C supports arguments -Wwrite-strings: YES 00:03:16.167 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:16.167 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:16.167 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:16.167 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:16.167 Build targets in project: 8 00:03:16.167 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:16.167 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:16.167 00:03:16.167 libvfio-user 0.0.1 00:03:16.167 00:03:16.167 User defined options 00:03:16.167 buildtype : debug 00:03:16.167 default_library: shared 00:03:16.167 libdir : /usr/local/lib 00:03:16.167 00:03:16.167 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:17.112 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:17.112 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:17.112 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:17.112 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:17.112 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:17.112 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:17.112 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:17.375 [7/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:17.375 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:17.376 [9/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:17.376 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:17.376 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:17.376 [12/37] Compiling C object samples/null.p/null.c.o 00:03:17.376 [13/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:17.376 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:17.376 [15/37] Compiling C object samples/server.p/server.c.o 00:03:17.376 [16/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:17.376 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:17.376 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:17.376 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:17.376 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:17.376 [21/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:17.376 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:17.376 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:17.376 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:17.376 [25/37] Compiling C object samples/client.p/client.c.o 00:03:17.376 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:17.376 [27/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:17.376 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:03:17.376 [29/37] Linking target samples/client 00:03:17.639 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:17.639 [31/37] Linking target test/unit_tests 00:03:17.639 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:17.639 [33/37] Linking target samples/server 00:03:17.639 [34/37] Linking target samples/shadow_ioeventfd_server 00:03:17.639 [35/37] Linking target samples/null 00:03:17.639 [36/37] Linking target samples/gpio-pci-idio-16 00:03:17.639 [37/37] Linking target samples/lspci 00:03:17.639 INFO: autodetecting backend as ninja 00:03:17.639 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
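The libvfio-user configure, build, and install sequence above can be reproduced by hand from the directories and option values printed in the Meson summary (buildtype debug, default_library shared, libdir /usr/local/lib). This is a sketch of the equivalent commands, not the exact wrapper the SPDK build invokes:

# Rebuild libvfio-user out of tree with the same options as the log.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
meson setup build/libvfio-user/build-debug libvfio-user \
    --buildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
ninja -C build/libvfio-user/build-debug
DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
    meson install --quiet -C build/libvfio-user/build-debug
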
00:03:17.904 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:18.484 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:18.484 ninja: no work to do. 00:03:30.782 CC lib/log/log.o 00:03:30.782 CC lib/ut/ut.o 00:03:30.782 CC lib/log/log_flags.o 00:03:30.782 CC lib/log/log_deprecated.o 00:03:30.782 CC lib/ut_mock/mock.o 00:03:30.782 LIB libspdk_ut.a 00:03:30.782 LIB libspdk_log.a 00:03:30.782 LIB libspdk_ut_mock.a 00:03:30.782 SO libspdk_ut.so.2.0 00:03:30.782 SO libspdk_ut_mock.so.6.0 00:03:30.782 SO libspdk_log.so.7.0 00:03:30.782 SYMLINK libspdk_ut.so 00:03:30.782 SYMLINK libspdk_ut_mock.so 00:03:30.782 SYMLINK libspdk_log.so 00:03:30.782 CC lib/dma/dma.o 00:03:30.782 CXX lib/trace_parser/trace.o 00:03:30.782 CC lib/ioat/ioat.o 00:03:30.782 CC lib/util/base64.o 00:03:30.782 CC lib/util/bit_array.o 00:03:30.782 CC lib/util/cpuset.o 00:03:30.782 CC lib/util/crc16.o 00:03:30.782 CC lib/util/crc32.o 00:03:30.782 CC lib/util/crc32c.o 00:03:30.782 CC lib/util/crc32_ieee.o 00:03:30.782 CC lib/util/crc64.o 00:03:30.782 CC lib/util/dif.o 00:03:30.782 CC lib/util/fd.o 00:03:30.782 CC lib/util/file.o 00:03:30.782 CC lib/util/hexlify.o 00:03:30.782 CC lib/util/iov.o 00:03:30.782 CC lib/util/math.o 00:03:30.782 CC lib/util/pipe.o 00:03:30.782 CC lib/util/strerror_tls.o 00:03:30.782 CC lib/util/string.o 00:03:30.782 CC lib/util/uuid.o 00:03:30.782 CC lib/util/fd_group.o 00:03:30.782 CC lib/util/xor.o 00:03:30.782 CC lib/util/zipf.o 00:03:30.782 CC lib/vfio_user/host/vfio_user_pci.o 00:03:30.782 CC lib/vfio_user/host/vfio_user.o 00:03:31.040 LIB libspdk_dma.a 00:03:31.040 SO libspdk_dma.so.4.0 00:03:31.040 SYMLINK libspdk_dma.so 00:03:31.040 LIB libspdk_ioat.a 00:03:31.040 SO libspdk_ioat.so.7.0 00:03:31.040 SYMLINK libspdk_ioat.so 00:03:31.040 LIB libspdk_vfio_user.a 00:03:31.298 SO libspdk_vfio_user.so.5.0 00:03:31.298 SYMLINK libspdk_vfio_user.so 00:03:31.298 LIB libspdk_util.a 00:03:31.298 SO libspdk_util.so.9.0 00:03:31.556 SYMLINK libspdk_util.so 00:03:31.813 LIB libspdk_trace_parser.a 00:03:31.813 CC lib/json/json_parse.o 00:03:31.813 CC lib/idxd/idxd.o 00:03:31.813 CC lib/rdma/common.o 00:03:31.813 CC lib/conf/conf.o 00:03:31.813 CC lib/vmd/vmd.o 00:03:31.813 CC lib/env_dpdk/env.o 00:03:31.813 CC lib/vmd/led.o 00:03:31.813 CC lib/rdma/rdma_verbs.o 00:03:31.813 CC lib/idxd/idxd_user.o 00:03:31.813 CC lib/json/json_util.o 00:03:31.813 CC lib/env_dpdk/memory.o 00:03:31.813 CC lib/json/json_write.o 00:03:31.813 CC lib/idxd/idxd_kernel.o 00:03:31.813 CC lib/env_dpdk/pci.o 00:03:31.813 CC lib/env_dpdk/init.o 00:03:31.813 CC lib/env_dpdk/threads.o 00:03:31.813 CC lib/env_dpdk/pci_ioat.o 00:03:31.813 CC lib/env_dpdk/pci_virtio.o 00:03:31.813 CC lib/env_dpdk/pci_vmd.o 00:03:31.813 CC lib/env_dpdk/pci_idxd.o 00:03:31.813 CC lib/env_dpdk/pci_event.o 00:03:31.813 CC lib/env_dpdk/sigbus_handler.o 00:03:31.813 CC lib/env_dpdk/pci_dpdk.o 00:03:31.813 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:31.813 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:31.813 SO libspdk_trace_parser.so.5.0 00:03:31.813 SYMLINK libspdk_trace_parser.so 00:03:32.093 LIB libspdk_conf.a 00:03:32.093 SO libspdk_conf.so.6.0 00:03:32.093 LIB libspdk_rdma.a 00:03:32.093 SYMLINK libspdk_conf.so 00:03:32.093 SO libspdk_rdma.so.6.0 00:03:32.093 LIB libspdk_json.a 00:03:32.093 SYMLINK libspdk_rdma.so 00:03:32.093 SO libspdk_json.so.6.0 00:03:32.351 SYMLINK 
libspdk_json.so 00:03:32.351 LIB libspdk_idxd.a 00:03:32.351 SO libspdk_idxd.so.12.0 00:03:32.351 CC lib/jsonrpc/jsonrpc_server.o 00:03:32.351 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:32.351 CC lib/jsonrpc/jsonrpc_client.o 00:03:32.351 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:32.351 LIB libspdk_vmd.a 00:03:32.351 SYMLINK libspdk_idxd.so 00:03:32.351 SO libspdk_vmd.so.6.0 00:03:32.610 SYMLINK libspdk_vmd.so 00:03:32.610 LIB libspdk_jsonrpc.a 00:03:32.610 SO libspdk_jsonrpc.so.6.0 00:03:32.867 SYMLINK libspdk_jsonrpc.so 00:03:32.867 CC lib/rpc/rpc.o 00:03:33.125 LIB libspdk_rpc.a 00:03:33.125 SO libspdk_rpc.so.6.0 00:03:33.125 SYMLINK libspdk_rpc.so 00:03:33.382 CC lib/trace/trace.o 00:03:33.382 CC lib/notify/notify.o 00:03:33.382 CC lib/trace/trace_flags.o 00:03:33.382 CC lib/keyring/keyring.o 00:03:33.382 CC lib/trace/trace_rpc.o 00:03:33.382 CC lib/notify/notify_rpc.o 00:03:33.382 CC lib/keyring/keyring_rpc.o 00:03:33.641 LIB libspdk_notify.a 00:03:33.641 SO libspdk_notify.so.6.0 00:03:33.641 LIB libspdk_keyring.a 00:03:33.641 SYMLINK libspdk_notify.so 00:03:33.641 LIB libspdk_trace.a 00:03:33.641 SO libspdk_keyring.so.1.0 00:03:33.641 SO libspdk_trace.so.10.0 00:03:33.641 SYMLINK libspdk_keyring.so 00:03:33.641 SYMLINK libspdk_trace.so 00:03:33.641 LIB libspdk_env_dpdk.a 00:03:33.899 SO libspdk_env_dpdk.so.14.0 00:03:33.899 CC lib/sock/sock.o 00:03:33.899 CC lib/sock/sock_rpc.o 00:03:33.899 CC lib/thread/thread.o 00:03:33.899 CC lib/thread/iobuf.o 00:03:33.899 SYMLINK libspdk_env_dpdk.so 00:03:34.157 LIB libspdk_sock.a 00:03:34.416 SO libspdk_sock.so.9.0 00:03:34.416 SYMLINK libspdk_sock.so 00:03:34.416 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:34.416 CC lib/nvme/nvme_ctrlr.o 00:03:34.416 CC lib/nvme/nvme_fabric.o 00:03:34.416 CC lib/nvme/nvme_ns_cmd.o 00:03:34.416 CC lib/nvme/nvme_ns.o 00:03:34.416 CC lib/nvme/nvme_pcie_common.o 00:03:34.416 CC lib/nvme/nvme_pcie.o 00:03:34.416 CC lib/nvme/nvme_qpair.o 00:03:34.416 CC lib/nvme/nvme.o 00:03:34.416 CC lib/nvme/nvme_quirks.o 00:03:34.416 CC lib/nvme/nvme_transport.o 00:03:34.416 CC lib/nvme/nvme_discovery.o 00:03:34.416 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:34.416 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:34.416 CC lib/nvme/nvme_tcp.o 00:03:34.416 CC lib/nvme/nvme_opal.o 00:03:34.416 CC lib/nvme/nvme_io_msg.o 00:03:34.416 CC lib/nvme/nvme_poll_group.o 00:03:34.416 CC lib/nvme/nvme_zns.o 00:03:34.416 CC lib/nvme/nvme_stubs.o 00:03:34.416 CC lib/nvme/nvme_auth.o 00:03:34.416 CC lib/nvme/nvme_cuse.o 00:03:34.416 CC lib/nvme/nvme_vfio_user.o 00:03:34.416 CC lib/nvme/nvme_rdma.o 00:03:35.350 LIB libspdk_thread.a 00:03:35.350 SO libspdk_thread.so.10.0 00:03:35.607 SYMLINK libspdk_thread.so 00:03:35.607 CC lib/init/json_config.o 00:03:35.607 CC lib/blob/blobstore.o 00:03:35.607 CC lib/accel/accel.o 00:03:35.607 CC lib/vfu_tgt/tgt_endpoint.o 00:03:35.607 CC lib/virtio/virtio.o 00:03:35.607 CC lib/accel/accel_rpc.o 00:03:35.607 CC lib/blob/request.o 00:03:35.607 CC lib/vfu_tgt/tgt_rpc.o 00:03:35.607 CC lib/virtio/virtio_vhost_user.o 00:03:35.607 CC lib/init/subsystem.o 00:03:35.607 CC lib/blob/zeroes.o 00:03:35.607 CC lib/virtio/virtio_vfio_user.o 00:03:35.607 CC lib/init/subsystem_rpc.o 00:03:35.607 CC lib/accel/accel_sw.o 00:03:35.607 CC lib/blob/blob_bs_dev.o 00:03:35.607 CC lib/virtio/virtio_pci.o 00:03:35.607 CC lib/init/rpc.o 00:03:35.864 LIB libspdk_init.a 00:03:36.122 SO libspdk_init.so.5.0 00:03:36.122 LIB libspdk_vfu_tgt.a 00:03:36.122 LIB libspdk_virtio.a 00:03:36.122 SYMLINK libspdk_init.so 00:03:36.122 SO libspdk_vfu_tgt.so.3.0 00:03:36.122 
SO libspdk_virtio.so.7.0 00:03:36.122 SYMLINK libspdk_vfu_tgt.so 00:03:36.122 SYMLINK libspdk_virtio.so 00:03:36.122 CC lib/event/app.o 00:03:36.122 CC lib/event/reactor.o 00:03:36.122 CC lib/event/log_rpc.o 00:03:36.122 CC lib/event/app_rpc.o 00:03:36.122 CC lib/event/scheduler_static.o 00:03:36.686 LIB libspdk_event.a 00:03:36.687 SO libspdk_event.so.13.0 00:03:36.687 SYMLINK libspdk_event.so 00:03:36.687 LIB libspdk_accel.a 00:03:36.687 SO libspdk_accel.so.15.0 00:03:36.944 SYMLINK libspdk_accel.so 00:03:36.944 LIB libspdk_nvme.a 00:03:36.944 SO libspdk_nvme.so.13.0 00:03:36.944 CC lib/bdev/bdev.o 00:03:36.944 CC lib/bdev/bdev_rpc.o 00:03:36.944 CC lib/bdev/bdev_zone.o 00:03:36.944 CC lib/bdev/part.o 00:03:36.944 CC lib/bdev/scsi_nvme.o 00:03:37.202 SYMLINK libspdk_nvme.so 00:03:39.100 LIB libspdk_blob.a 00:03:39.101 SO libspdk_blob.so.11.0 00:03:39.101 SYMLINK libspdk_blob.so 00:03:39.101 CC lib/lvol/lvol.o 00:03:39.101 CC lib/blobfs/blobfs.o 00:03:39.101 CC lib/blobfs/tree.o 00:03:39.359 LIB libspdk_bdev.a 00:03:39.617 SO libspdk_bdev.so.15.0 00:03:39.617 SYMLINK libspdk_bdev.so 00:03:39.887 CC lib/ublk/ublk.o 00:03:39.887 CC lib/nbd/nbd.o 00:03:39.887 CC lib/scsi/dev.o 00:03:39.887 CC lib/ublk/ublk_rpc.o 00:03:39.887 CC lib/nbd/nbd_rpc.o 00:03:39.887 CC lib/scsi/lun.o 00:03:39.887 CC lib/scsi/port.o 00:03:39.887 CC lib/scsi/scsi.o 00:03:39.887 CC lib/scsi/scsi_bdev.o 00:03:39.887 CC lib/scsi/scsi_pr.o 00:03:39.887 CC lib/scsi/scsi_rpc.o 00:03:39.887 CC lib/nvmf/ctrlr.o 00:03:39.887 CC lib/ftl/ftl_core.o 00:03:39.887 CC lib/scsi/task.o 00:03:39.887 CC lib/nvmf/ctrlr_discovery.o 00:03:39.887 CC lib/ftl/ftl_init.o 00:03:39.887 CC lib/ftl/ftl_layout.o 00:03:39.887 CC lib/nvmf/ctrlr_bdev.o 00:03:39.887 CC lib/nvmf/subsystem.o 00:03:39.887 CC lib/ftl/ftl_debug.o 00:03:39.887 CC lib/ftl/ftl_io.o 00:03:39.887 CC lib/nvmf/nvmf.o 00:03:39.887 CC lib/ftl/ftl_l2p.o 00:03:39.887 CC lib/ftl/ftl_sb.o 00:03:39.887 CC lib/nvmf/nvmf_rpc.o 00:03:39.887 CC lib/nvmf/transport.o 00:03:39.887 CC lib/nvmf/tcp.o 00:03:39.887 CC lib/ftl/ftl_l2p_flat.o 00:03:39.887 CC lib/ftl/ftl_nv_cache.o 00:03:39.887 CC lib/nvmf/stubs.o 00:03:39.887 CC lib/ftl/ftl_band.o 00:03:39.887 CC lib/ftl/ftl_band_ops.o 00:03:39.887 CC lib/nvmf/mdns_server.o 00:03:39.887 CC lib/nvmf/vfio_user.o 00:03:39.887 CC lib/ftl/ftl_writer.o 00:03:39.887 CC lib/nvmf/rdma.o 00:03:39.887 CC lib/ftl/ftl_rq.o 00:03:39.887 CC lib/nvmf/auth.o 00:03:39.887 CC lib/ftl/ftl_reloc.o 00:03:39.887 CC lib/ftl/ftl_l2p_cache.o 00:03:39.887 CC lib/ftl/ftl_p2l.o 00:03:39.887 CC lib/ftl/mngt/ftl_mngt.o 00:03:39.887 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:39.887 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:39.887 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:39.887 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:39.887 LIB libspdk_blobfs.a 00:03:39.887 SO libspdk_blobfs.so.10.0 00:03:40.146 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:40.146 SYMLINK libspdk_blobfs.so 00:03:40.146 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:40.146 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:40.146 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:40.146 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:40.146 LIB libspdk_lvol.a 00:03:40.146 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:40.146 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:40.146 CC lib/ftl/utils/ftl_conf.o 00:03:40.146 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:40.146 CC lib/ftl/utils/ftl_md.o 00:03:40.146 SO libspdk_lvol.so.10.0 00:03:40.146 CC lib/ftl/utils/ftl_mempool.o 00:03:40.146 CC lib/ftl/utils/ftl_bitmap.o 00:03:40.146 CC lib/ftl/utils/ftl_property.o 00:03:40.146 CC 
lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:40.408 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:40.408 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:40.408 SYMLINK libspdk_lvol.so 00:03:40.408 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:40.408 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:40.408 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:40.408 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:40.408 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:40.408 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:40.408 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:40.408 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:40.408 CC lib/ftl/base/ftl_base_dev.o 00:03:40.408 CC lib/ftl/base/ftl_base_bdev.o 00:03:40.408 CC lib/ftl/ftl_trace.o 00:03:40.669 LIB libspdk_nbd.a 00:03:40.669 SO libspdk_nbd.so.7.0 00:03:40.669 SYMLINK libspdk_nbd.so 00:03:40.669 LIB libspdk_scsi.a 00:03:40.669 SO libspdk_scsi.so.9.0 00:03:40.928 LIB libspdk_ublk.a 00:03:40.928 SO libspdk_ublk.so.3.0 00:03:40.928 SYMLINK libspdk_scsi.so 00:03:40.928 SYMLINK libspdk_ublk.so 00:03:40.928 CC lib/iscsi/conn.o 00:03:40.928 CC lib/vhost/vhost.o 00:03:40.928 CC lib/iscsi/init_grp.o 00:03:40.928 CC lib/vhost/vhost_rpc.o 00:03:40.928 CC lib/iscsi/iscsi.o 00:03:40.928 CC lib/vhost/vhost_scsi.o 00:03:40.928 CC lib/iscsi/md5.o 00:03:40.928 CC lib/vhost/vhost_blk.o 00:03:40.928 CC lib/iscsi/param.o 00:03:40.928 CC lib/vhost/rte_vhost_user.o 00:03:40.928 CC lib/iscsi/portal_grp.o 00:03:40.928 CC lib/iscsi/tgt_node.o 00:03:40.928 CC lib/iscsi/iscsi_subsystem.o 00:03:40.928 CC lib/iscsi/iscsi_rpc.o 00:03:40.928 CC lib/iscsi/task.o 00:03:41.186 LIB libspdk_ftl.a 00:03:41.445 SO libspdk_ftl.so.9.0 00:03:41.704 SYMLINK libspdk_ftl.so 00:03:42.270 LIB libspdk_vhost.a 00:03:42.270 SO libspdk_vhost.so.8.0 00:03:42.270 LIB libspdk_nvmf.a 00:03:42.529 SYMLINK libspdk_vhost.so 00:03:42.529 SO libspdk_nvmf.so.18.0 00:03:42.529 LIB libspdk_iscsi.a 00:03:42.529 SO libspdk_iscsi.so.8.0 00:03:42.529 SYMLINK libspdk_nvmf.so 00:03:42.786 SYMLINK libspdk_iscsi.so 00:03:43.044 CC module/env_dpdk/env_dpdk_rpc.o 00:03:43.044 CC module/vfu_device/vfu_virtio.o 00:03:43.044 CC module/vfu_device/vfu_virtio_blk.o 00:03:43.044 CC module/vfu_device/vfu_virtio_scsi.o 00:03:43.044 CC module/vfu_device/vfu_virtio_rpc.o 00:03:43.044 CC module/keyring/file/keyring.o 00:03:43.044 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:43.044 CC module/accel/iaa/accel_iaa.o 00:03:43.044 CC module/accel/dsa/accel_dsa.o 00:03:43.044 CC module/keyring/file/keyring_rpc.o 00:03:43.044 CC module/accel/dsa/accel_dsa_rpc.o 00:03:43.044 CC module/accel/iaa/accel_iaa_rpc.o 00:03:43.044 CC module/blob/bdev/blob_bdev.o 00:03:43.044 CC module/accel/error/accel_error.o 00:03:43.044 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:43.044 CC module/accel/ioat/accel_ioat.o 00:03:43.044 CC module/scheduler/gscheduler/gscheduler.o 00:03:43.044 CC module/accel/error/accel_error_rpc.o 00:03:43.044 CC module/keyring/linux/keyring.o 00:03:43.044 CC module/sock/posix/posix.o 00:03:43.044 CC module/accel/ioat/accel_ioat_rpc.o 00:03:43.044 CC module/keyring/linux/keyring_rpc.o 00:03:43.044 LIB libspdk_env_dpdk_rpc.a 00:03:43.044 SO libspdk_env_dpdk_rpc.so.6.0 00:03:43.301 SYMLINK libspdk_env_dpdk_rpc.so 00:03:43.301 LIB libspdk_keyring_linux.a 00:03:43.301 LIB libspdk_keyring_file.a 00:03:43.301 LIB libspdk_scheduler_dpdk_governor.a 00:03:43.301 LIB libspdk_scheduler_gscheduler.a 00:03:43.301 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:43.301 SO libspdk_scheduler_gscheduler.so.4.0 00:03:43.301 SO libspdk_keyring_linux.so.1.0 00:03:43.301 SO 
libspdk_keyring_file.so.1.0 00:03:43.301 LIB libspdk_accel_error.a 00:03:43.301 LIB libspdk_scheduler_dynamic.a 00:03:43.301 LIB libspdk_accel_ioat.a 00:03:43.301 SO libspdk_accel_error.so.2.0 00:03:43.301 LIB libspdk_accel_iaa.a 00:03:43.301 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:43.301 SYMLINK libspdk_scheduler_gscheduler.so 00:03:43.301 SO libspdk_scheduler_dynamic.so.4.0 00:03:43.301 SO libspdk_accel_ioat.so.6.0 00:03:43.301 SYMLINK libspdk_keyring_linux.so 00:03:43.301 SYMLINK libspdk_keyring_file.so 00:03:43.301 SO libspdk_accel_iaa.so.3.0 00:03:43.301 LIB libspdk_accel_dsa.a 00:03:43.301 SYMLINK libspdk_accel_error.so 00:03:43.301 SYMLINK libspdk_scheduler_dynamic.so 00:03:43.301 LIB libspdk_blob_bdev.a 00:03:43.301 SYMLINK libspdk_accel_ioat.so 00:03:43.301 SO libspdk_accel_dsa.so.5.0 00:03:43.301 SYMLINK libspdk_accel_iaa.so 00:03:43.301 SO libspdk_blob_bdev.so.11.0 00:03:43.559 SYMLINK libspdk_accel_dsa.so 00:03:43.559 SYMLINK libspdk_blob_bdev.so 00:03:43.559 LIB libspdk_vfu_device.a 00:03:43.559 SO libspdk_vfu_device.so.3.0 00:03:43.817 CC module/bdev/aio/bdev_aio.o 00:03:43.817 CC module/bdev/nvme/bdev_nvme.o 00:03:43.817 CC module/bdev/delay/vbdev_delay.o 00:03:43.817 CC module/bdev/aio/bdev_aio_rpc.o 00:03:43.817 CC module/bdev/gpt/gpt.o 00:03:43.817 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:43.817 CC module/blobfs/bdev/blobfs_bdev.o 00:03:43.817 CC module/bdev/passthru/vbdev_passthru.o 00:03:43.817 CC module/bdev/raid/bdev_raid.o 00:03:43.817 CC module/bdev/raid/bdev_raid_rpc.o 00:03:43.817 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:43.817 CC module/bdev/ftl/bdev_ftl.o 00:03:43.817 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:43.817 CC module/bdev/raid/bdev_raid_sb.o 00:03:43.817 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:43.817 CC module/bdev/nvme/nvme_rpc.o 00:03:43.817 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:43.817 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:43.817 CC module/bdev/malloc/bdev_malloc.o 00:03:43.817 CC module/bdev/split/vbdev_split.o 00:03:43.817 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:43.817 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:43.817 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:43.817 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:43.817 CC module/bdev/raid/raid0.o 00:03:43.817 CC module/bdev/raid/raid1.o 00:03:43.817 CC module/bdev/split/vbdev_split_rpc.o 00:03:43.817 CC module/bdev/nvme/bdev_mdns_client.o 00:03:43.817 CC module/bdev/null/bdev_null.o 00:03:43.817 CC module/bdev/iscsi/bdev_iscsi.o 00:03:43.817 CC module/bdev/gpt/vbdev_gpt.o 00:03:43.817 CC module/bdev/nvme/vbdev_opal.o 00:03:43.817 CC module/bdev/raid/concat.o 00:03:43.817 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:43.817 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:43.817 CC module/bdev/null/bdev_null_rpc.o 00:03:43.817 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:43.817 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:43.817 CC module/bdev/error/vbdev_error.o 00:03:43.817 CC module/bdev/error/vbdev_error_rpc.o 00:03:43.817 CC module/bdev/lvol/vbdev_lvol.o 00:03:43.817 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:43.817 SYMLINK libspdk_vfu_device.so 00:03:44.074 LIB libspdk_sock_posix.a 00:03:44.074 SO libspdk_sock_posix.so.6.0 00:03:44.074 LIB libspdk_blobfs_bdev.a 00:03:44.074 SO libspdk_blobfs_bdev.so.6.0 00:03:44.074 LIB libspdk_bdev_split.a 00:03:44.074 LIB libspdk_bdev_delay.a 00:03:44.074 SYMLINK libspdk_blobfs_bdev.so 00:03:44.074 LIB libspdk_bdev_null.a 00:03:44.074 SYMLINK libspdk_sock_posix.so 00:03:44.074 SO 
libspdk_bdev_split.so.6.0 00:03:44.074 LIB libspdk_bdev_ftl.a 00:03:44.074 SO libspdk_bdev_delay.so.6.0 00:03:44.074 SO libspdk_bdev_null.so.6.0 00:03:44.074 LIB libspdk_bdev_error.a 00:03:44.074 LIB libspdk_bdev_gpt.a 00:03:44.331 LIB libspdk_bdev_passthru.a 00:03:44.331 SO libspdk_bdev_ftl.so.6.0 00:03:44.331 SO libspdk_bdev_error.so.6.0 00:03:44.331 SO libspdk_bdev_gpt.so.6.0 00:03:44.331 SO libspdk_bdev_passthru.so.6.0 00:03:44.331 SYMLINK libspdk_bdev_split.so 00:03:44.331 SYMLINK libspdk_bdev_delay.so 00:03:44.331 LIB libspdk_bdev_iscsi.a 00:03:44.331 SYMLINK libspdk_bdev_null.so 00:03:44.331 LIB libspdk_bdev_zone_block.a 00:03:44.331 SYMLINK libspdk_bdev_ftl.so 00:03:44.331 SO libspdk_bdev_iscsi.so.6.0 00:03:44.331 SO libspdk_bdev_zone_block.so.6.0 00:03:44.331 SYMLINK libspdk_bdev_error.so 00:03:44.331 SYMLINK libspdk_bdev_gpt.so 00:03:44.331 SYMLINK libspdk_bdev_passthru.so 00:03:44.331 LIB libspdk_bdev_aio.a 00:03:44.331 SYMLINK libspdk_bdev_iscsi.so 00:03:44.331 SYMLINK libspdk_bdev_zone_block.so 00:03:44.331 LIB libspdk_bdev_malloc.a 00:03:44.331 SO libspdk_bdev_aio.so.6.0 00:03:44.331 SO libspdk_bdev_malloc.so.6.0 00:03:44.331 SYMLINK libspdk_bdev_aio.so 00:03:44.331 LIB libspdk_bdev_virtio.a 00:03:44.331 SYMLINK libspdk_bdev_malloc.so 00:03:44.331 SO libspdk_bdev_virtio.so.6.0 00:03:44.589 LIB libspdk_bdev_lvol.a 00:03:44.589 SYMLINK libspdk_bdev_virtio.so 00:03:44.589 SO libspdk_bdev_lvol.so.6.0 00:03:44.589 SYMLINK libspdk_bdev_lvol.so 00:03:44.875 LIB libspdk_bdev_raid.a 00:03:44.875 SO libspdk_bdev_raid.so.6.0 00:03:45.132 SYMLINK libspdk_bdev_raid.so 00:03:46.064 LIB libspdk_bdev_nvme.a 00:03:46.064 SO libspdk_bdev_nvme.so.7.0 00:03:46.064 SYMLINK libspdk_bdev_nvme.so 00:03:46.642 CC module/event/subsystems/keyring/keyring.o 00:03:46.642 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:46.642 CC module/event/subsystems/vmd/vmd.o 00:03:46.642 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:46.642 CC module/event/subsystems/scheduler/scheduler.o 00:03:46.642 CC module/event/subsystems/sock/sock.o 00:03:46.642 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:46.642 CC module/event/subsystems/iobuf/iobuf.o 00:03:46.642 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:46.642 LIB libspdk_event_keyring.a 00:03:46.642 LIB libspdk_event_vhost_blk.a 00:03:46.642 LIB libspdk_event_vfu_tgt.a 00:03:46.642 LIB libspdk_event_sock.a 00:03:46.642 LIB libspdk_event_scheduler.a 00:03:46.642 LIB libspdk_event_vmd.a 00:03:46.642 SO libspdk_event_keyring.so.1.0 00:03:46.642 LIB libspdk_event_iobuf.a 00:03:46.642 SO libspdk_event_scheduler.so.4.0 00:03:46.642 SO libspdk_event_vfu_tgt.so.3.0 00:03:46.642 SO libspdk_event_vhost_blk.so.3.0 00:03:46.642 SO libspdk_event_sock.so.5.0 00:03:46.642 SO libspdk_event_vmd.so.6.0 00:03:46.642 SO libspdk_event_iobuf.so.3.0 00:03:46.642 SYMLINK libspdk_event_keyring.so 00:03:46.642 SYMLINK libspdk_event_sock.so 00:03:46.642 SYMLINK libspdk_event_vhost_blk.so 00:03:46.642 SYMLINK libspdk_event_vfu_tgt.so 00:03:46.642 SYMLINK libspdk_event_scheduler.so 00:03:46.642 SYMLINK libspdk_event_vmd.so 00:03:46.642 SYMLINK libspdk_event_iobuf.so 00:03:46.900 CC module/event/subsystems/accel/accel.o 00:03:47.157 LIB libspdk_event_accel.a 00:03:47.157 SO libspdk_event_accel.so.6.0 00:03:47.157 SYMLINK libspdk_event_accel.so 00:03:47.415 CC module/event/subsystems/bdev/bdev.o 00:03:47.415 LIB libspdk_event_bdev.a 00:03:47.415 SO libspdk_event_bdev.so.6.0 00:03:47.672 SYMLINK libspdk_event_bdev.so 00:03:47.672 CC module/event/subsystems/nvmf/nvmf_rpc.o 
00:03:47.672 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:47.672 CC module/event/subsystems/nbd/nbd.o 00:03:47.672 CC module/event/subsystems/scsi/scsi.o 00:03:47.672 CC module/event/subsystems/ublk/ublk.o 00:03:47.930 LIB libspdk_event_nbd.a 00:03:47.930 LIB libspdk_event_ublk.a 00:03:47.930 SO libspdk_event_ublk.so.3.0 00:03:47.930 SO libspdk_event_nbd.so.6.0 00:03:47.930 LIB libspdk_event_scsi.a 00:03:47.930 SO libspdk_event_scsi.so.6.0 00:03:47.930 SYMLINK libspdk_event_ublk.so 00:03:47.930 SYMLINK libspdk_event_nbd.so 00:03:47.930 LIB libspdk_event_nvmf.a 00:03:47.930 SYMLINK libspdk_event_scsi.so 00:03:47.930 SO libspdk_event_nvmf.so.6.0 00:03:47.930 SYMLINK libspdk_event_nvmf.so 00:03:48.189 CC module/event/subsystems/iscsi/iscsi.o 00:03:48.189 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:48.189 LIB libspdk_event_vhost_scsi.a 00:03:48.189 LIB libspdk_event_iscsi.a 00:03:48.189 SO libspdk_event_vhost_scsi.so.3.0 00:03:48.189 SO libspdk_event_iscsi.so.6.0 00:03:48.449 SYMLINK libspdk_event_vhost_scsi.so 00:03:48.449 SYMLINK libspdk_event_iscsi.so 00:03:48.449 SO libspdk.so.6.0 00:03:48.449 SYMLINK libspdk.so 00:03:48.713 CC app/spdk_lspci/spdk_lspci.o 00:03:48.713 CC app/trace_record/trace_record.o 00:03:48.713 TEST_HEADER include/spdk/accel.h 00:03:48.713 TEST_HEADER include/spdk/accel_module.h 00:03:48.713 TEST_HEADER include/spdk/assert.h 00:03:48.713 CXX app/trace/trace.o 00:03:48.713 CC app/spdk_top/spdk_top.o 00:03:48.713 TEST_HEADER include/spdk/barrier.h 00:03:48.713 CC app/spdk_nvme_perf/perf.o 00:03:48.713 TEST_HEADER include/spdk/base64.h 00:03:48.713 CC app/spdk_nvme_identify/identify.o 00:03:48.713 TEST_HEADER include/spdk/bdev.h 00:03:48.713 TEST_HEADER include/spdk/bdev_module.h 00:03:48.713 TEST_HEADER include/spdk/bdev_zone.h 00:03:48.713 CC app/spdk_nvme_discover/discovery_aer.o 00:03:48.713 TEST_HEADER include/spdk/bit_array.h 00:03:48.713 CC test/rpc_client/rpc_client_test.o 00:03:48.713 TEST_HEADER include/spdk/bit_pool.h 00:03:48.713 TEST_HEADER include/spdk/blob_bdev.h 00:03:48.713 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:48.713 TEST_HEADER include/spdk/blobfs.h 00:03:48.713 TEST_HEADER include/spdk/blob.h 00:03:48.713 TEST_HEADER include/spdk/conf.h 00:03:48.713 TEST_HEADER include/spdk/config.h 00:03:48.713 TEST_HEADER include/spdk/cpuset.h 00:03:48.713 TEST_HEADER include/spdk/crc16.h 00:03:48.713 TEST_HEADER include/spdk/crc32.h 00:03:48.713 TEST_HEADER include/spdk/crc64.h 00:03:48.713 TEST_HEADER include/spdk/dif.h 00:03:48.713 TEST_HEADER include/spdk/dma.h 00:03:48.713 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:48.713 CC app/spdk_dd/spdk_dd.o 00:03:48.713 TEST_HEADER include/spdk/endian.h 00:03:48.714 TEST_HEADER include/spdk/env_dpdk.h 00:03:48.714 TEST_HEADER include/spdk/env.h 00:03:48.714 TEST_HEADER include/spdk/event.h 00:03:48.714 CC app/nvmf_tgt/nvmf_main.o 00:03:48.714 TEST_HEADER include/spdk/fd_group.h 00:03:48.714 TEST_HEADER include/spdk/fd.h 00:03:48.714 CC app/vhost/vhost.o 00:03:48.714 CC app/iscsi_tgt/iscsi_tgt.o 00:03:48.714 TEST_HEADER include/spdk/file.h 00:03:48.714 TEST_HEADER include/spdk/ftl.h 00:03:48.714 TEST_HEADER include/spdk/gpt_spec.h 00:03:48.714 TEST_HEADER include/spdk/hexlify.h 00:03:48.714 TEST_HEADER include/spdk/histogram_data.h 00:03:48.714 TEST_HEADER include/spdk/idxd.h 00:03:48.714 TEST_HEADER include/spdk/idxd_spec.h 00:03:48.714 TEST_HEADER include/spdk/init.h 00:03:48.714 TEST_HEADER include/spdk/ioat.h 00:03:48.714 TEST_HEADER include/spdk/ioat_spec.h 00:03:48.714 TEST_HEADER 
include/spdk/iscsi_spec.h 00:03:48.714 TEST_HEADER include/spdk/json.h 00:03:48.714 CC app/spdk_tgt/spdk_tgt.o 00:03:48.714 TEST_HEADER include/spdk/jsonrpc.h 00:03:48.714 TEST_HEADER include/spdk/keyring.h 00:03:48.714 TEST_HEADER include/spdk/keyring_module.h 00:03:48.714 CC test/app/jsoncat/jsoncat.o 00:03:48.714 CC examples/vmd/led/led.o 00:03:48.714 TEST_HEADER include/spdk/likely.h 00:03:48.714 CC examples/vmd/lsvmd/lsvmd.o 00:03:48.714 TEST_HEADER include/spdk/log.h 00:03:48.714 CC examples/idxd/perf/perf.o 00:03:48.714 CC test/app/histogram_perf/histogram_perf.o 00:03:48.714 CC app/fio/nvme/fio_plugin.o 00:03:48.714 CC examples/ioat/verify/verify.o 00:03:48.714 CC examples/sock/hello_world/hello_sock.o 00:03:48.714 TEST_HEADER include/spdk/lvol.h 00:03:48.714 TEST_HEADER include/spdk/memory.h 00:03:48.714 CC test/event/event_perf/event_perf.o 00:03:48.978 CC examples/ioat/perf/perf.o 00:03:48.979 TEST_HEADER include/spdk/mmio.h 00:03:48.979 CC examples/util/zipf/zipf.o 00:03:48.979 TEST_HEADER include/spdk/nbd.h 00:03:48.979 CC test/nvme/aer/aer.o 00:03:48.979 CC test/app/stub/stub.o 00:03:48.979 CC test/thread/poller_perf/poller_perf.o 00:03:48.979 TEST_HEADER include/spdk/notify.h 00:03:48.979 CC examples/nvme/hello_world/hello_world.o 00:03:48.979 CC test/event/reactor_perf/reactor_perf.o 00:03:48.979 TEST_HEADER include/spdk/nvme.h 00:03:48.979 CC examples/accel/perf/accel_perf.o 00:03:48.979 TEST_HEADER include/spdk/nvme_intel.h 00:03:48.979 CC test/event/reactor/reactor.o 00:03:48.979 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:48.979 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:48.979 TEST_HEADER include/spdk/nvme_spec.h 00:03:48.979 TEST_HEADER include/spdk/nvme_zns.h 00:03:48.979 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:48.979 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:48.979 TEST_HEADER include/spdk/nvmf.h 00:03:48.979 TEST_HEADER include/spdk/nvmf_spec.h 00:03:48.979 TEST_HEADER include/spdk/nvmf_transport.h 00:03:48.979 TEST_HEADER include/spdk/opal.h 00:03:48.979 TEST_HEADER include/spdk/opal_spec.h 00:03:48.979 TEST_HEADER include/spdk/pci_ids.h 00:03:48.979 TEST_HEADER include/spdk/pipe.h 00:03:48.979 TEST_HEADER include/spdk/queue.h 00:03:48.979 TEST_HEADER include/spdk/reduce.h 00:03:48.979 CC test/bdev/bdevio/bdevio.o 00:03:48.979 CC examples/bdev/hello_world/hello_bdev.o 00:03:48.979 TEST_HEADER include/spdk/rpc.h 00:03:48.979 CC examples/blob/hello_world/hello_blob.o 00:03:48.979 CC examples/blob/cli/blobcli.o 00:03:48.979 TEST_HEADER include/spdk/scheduler.h 00:03:48.979 TEST_HEADER include/spdk/scsi.h 00:03:48.979 CC test/accel/dif/dif.o 00:03:48.979 CC app/fio/bdev/fio_plugin.o 00:03:48.979 TEST_HEADER include/spdk/scsi_spec.h 00:03:48.979 TEST_HEADER include/spdk/sock.h 00:03:48.979 CC examples/thread/thread/thread_ex.o 00:03:48.979 CC test/app/bdev_svc/bdev_svc.o 00:03:48.979 CC examples/bdev/bdevperf/bdevperf.o 00:03:48.979 TEST_HEADER include/spdk/stdinc.h 00:03:48.979 TEST_HEADER include/spdk/string.h 00:03:48.979 CC test/dma/test_dma/test_dma.o 00:03:48.979 TEST_HEADER include/spdk/thread.h 00:03:48.979 CC test/blobfs/mkfs/mkfs.o 00:03:48.979 TEST_HEADER include/spdk/trace.h 00:03:48.979 TEST_HEADER include/spdk/trace_parser.h 00:03:48.979 TEST_HEADER include/spdk/tree.h 00:03:48.979 TEST_HEADER include/spdk/ublk.h 00:03:48.979 CC examples/nvmf/nvmf/nvmf.o 00:03:48.979 TEST_HEADER include/spdk/util.h 00:03:48.979 TEST_HEADER include/spdk/uuid.h 00:03:48.979 TEST_HEADER include/spdk/version.h 00:03:48.979 TEST_HEADER include/spdk/vfio_user_pci.h 
00:03:48.979 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:48.979 TEST_HEADER include/spdk/vhost.h 00:03:48.979 TEST_HEADER include/spdk/vmd.h 00:03:48.979 TEST_HEADER include/spdk/xor.h 00:03:48.979 TEST_HEADER include/spdk/zipf.h 00:03:48.979 CXX test/cpp_headers/accel.o 00:03:48.979 LINK spdk_lspci 00:03:48.979 CC test/lvol/esnap/esnap.o 00:03:48.979 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:48.979 CC test/env/mem_callbacks/mem_callbacks.o 00:03:48.979 LINK rpc_client_test 00:03:49.240 LINK spdk_nvme_discover 00:03:49.240 LINK lsvmd 00:03:49.240 LINK jsoncat 00:03:49.240 LINK led 00:03:49.240 LINK interrupt_tgt 00:03:49.240 LINK histogram_perf 00:03:49.240 LINK nvmf_tgt 00:03:49.240 LINK reactor 00:03:49.240 LINK reactor_perf 00:03:49.240 LINK zipf 00:03:49.240 LINK event_perf 00:03:49.240 LINK vhost 00:03:49.240 LINK poller_perf 00:03:49.240 LINK spdk_trace_record 00:03:49.240 LINK stub 00:03:49.240 LINK iscsi_tgt 00:03:49.240 LINK spdk_tgt 00:03:49.240 LINK verify 00:03:49.240 LINK ioat_perf 00:03:49.240 LINK bdev_svc 00:03:49.240 LINK hello_sock 00:03:49.240 LINK hello_world 00:03:49.500 LINK mkfs 00:03:49.500 LINK hello_bdev 00:03:49.500 CXX test/cpp_headers/accel_module.o 00:03:49.500 LINK thread 00:03:49.500 LINK hello_blob 00:03:49.500 LINK aer 00:03:49.500 LINK mem_callbacks 00:03:49.500 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:49.500 LINK spdk_dd 00:03:49.500 CXX test/cpp_headers/assert.o 00:03:49.500 LINK idxd_perf 00:03:49.500 CXX test/cpp_headers/barrier.o 00:03:49.500 CXX test/cpp_headers/base64.o 00:03:49.500 CXX test/cpp_headers/bdev.o 00:03:49.500 CC test/env/vtophys/vtophys.o 00:03:49.500 LINK spdk_trace 00:03:49.500 LINK nvmf 00:03:49.763 CC test/nvme/reset/reset.o 00:03:49.763 CC examples/nvme/reconnect/reconnect.o 00:03:49.763 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:49.763 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:49.763 CC test/nvme/sgl/sgl.o 00:03:49.763 LINK bdevio 00:03:49.763 CXX test/cpp_headers/bdev_module.o 00:03:49.763 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:49.763 CXX test/cpp_headers/bdev_zone.o 00:03:49.763 LINK test_dma 00:03:49.763 CC test/event/app_repeat/app_repeat.o 00:03:49.763 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:49.763 LINK dif 00:03:49.763 CC test/env/memory/memory_ut.o 00:03:49.763 CC examples/nvme/arbitration/arbitration.o 00:03:49.763 CC test/nvme/e2edp/nvme_dp.o 00:03:49.763 CC test/env/pci/pci_ut.o 00:03:49.763 CXX test/cpp_headers/bit_array.o 00:03:49.763 CC examples/nvme/hotplug/hotplug.o 00:03:49.763 LINK accel_perf 00:03:49.763 CC test/event/scheduler/scheduler.o 00:03:49.763 CXX test/cpp_headers/bit_pool.o 00:03:49.763 LINK nvme_fuzz 00:03:49.763 CC examples/nvme/abort/abort.o 00:03:49.763 CXX test/cpp_headers/blob_bdev.o 00:03:49.763 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:49.763 CXX test/cpp_headers/blobfs_bdev.o 00:03:49.764 CXX test/cpp_headers/blobfs.o 00:03:49.764 LINK vtophys 00:03:49.764 CXX test/cpp_headers/blob.o 00:03:49.764 LINK blobcli 00:03:50.030 LINK spdk_nvme 00:03:50.030 CXX test/cpp_headers/conf.o 00:03:50.030 CC test/nvme/overhead/overhead.o 00:03:50.030 CC test/nvme/err_injection/err_injection.o 00:03:50.030 CXX test/cpp_headers/config.o 00:03:50.030 LINK spdk_bdev 00:03:50.030 CXX test/cpp_headers/cpuset.o 00:03:50.030 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:50.030 CC test/nvme/startup/startup.o 00:03:50.030 LINK env_dpdk_post_init 00:03:50.030 CC test/nvme/reserve/reserve.o 00:03:50.030 LINK app_repeat 00:03:50.030 CXX 
test/cpp_headers/crc16.o 00:03:50.030 CXX test/cpp_headers/crc32.o 00:03:50.030 CXX test/cpp_headers/crc64.o 00:03:50.030 CC test/nvme/simple_copy/simple_copy.o 00:03:50.030 CC test/nvme/connect_stress/connect_stress.o 00:03:50.030 LINK reset 00:03:50.291 CXX test/cpp_headers/dif.o 00:03:50.291 CC test/nvme/boot_partition/boot_partition.o 00:03:50.291 CXX test/cpp_headers/dma.o 00:03:50.291 CXX test/cpp_headers/endian.o 00:03:50.291 CXX test/cpp_headers/env_dpdk.o 00:03:50.291 LINK spdk_nvme_perf 00:03:50.291 LINK sgl 00:03:50.291 CC test/nvme/compliance/nvme_compliance.o 00:03:50.291 LINK cmb_copy 00:03:50.291 CC test/nvme/fused_ordering/fused_ordering.o 00:03:50.291 CXX test/cpp_headers/env.o 00:03:50.291 LINK scheduler 00:03:50.291 CXX test/cpp_headers/event.o 00:03:50.291 CXX test/cpp_headers/fd_group.o 00:03:50.291 CXX test/cpp_headers/fd.o 00:03:50.291 LINK hotplug 00:03:50.291 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:50.291 CXX test/cpp_headers/file.o 00:03:50.291 CXX test/cpp_headers/ftl.o 00:03:50.291 LINK err_injection 00:03:50.291 CXX test/cpp_headers/gpt_spec.o 00:03:50.291 LINK spdk_nvme_identify 00:03:50.291 CC test/nvme/cuse/cuse.o 00:03:50.291 LINK pmr_persistence 00:03:50.291 CC test/nvme/fdp/fdp.o 00:03:50.291 CXX test/cpp_headers/hexlify.o 00:03:50.291 LINK reconnect 00:03:50.291 LINK nvme_dp 00:03:50.291 CXX test/cpp_headers/histogram_data.o 00:03:50.291 LINK startup 00:03:50.554 LINK spdk_top 00:03:50.554 LINK arbitration 00:03:50.554 LINK reserve 00:03:50.554 CXX test/cpp_headers/idxd.o 00:03:50.554 CXX test/cpp_headers/idxd_spec.o 00:03:50.554 LINK bdevperf 00:03:50.554 CXX test/cpp_headers/init.o 00:03:50.554 CXX test/cpp_headers/ioat.o 00:03:50.554 LINK overhead 00:03:50.554 LINK connect_stress 00:03:50.554 CXX test/cpp_headers/ioat_spec.o 00:03:50.554 CXX test/cpp_headers/iscsi_spec.o 00:03:50.554 CXX test/cpp_headers/json.o 00:03:50.554 LINK pci_ut 00:03:50.554 LINK vhost_fuzz 00:03:50.554 CXX test/cpp_headers/jsonrpc.o 00:03:50.554 LINK boot_partition 00:03:50.554 CXX test/cpp_headers/keyring.o 00:03:50.554 CXX test/cpp_headers/keyring_module.o 00:03:50.554 LINK abort 00:03:50.554 CXX test/cpp_headers/likely.o 00:03:50.554 LINK simple_copy 00:03:50.554 LINK nvme_manage 00:03:50.554 CXX test/cpp_headers/log.o 00:03:50.554 CXX test/cpp_headers/lvol.o 00:03:50.554 CXX test/cpp_headers/memory.o 00:03:50.554 CXX test/cpp_headers/mmio.o 00:03:50.554 CXX test/cpp_headers/nbd.o 00:03:50.554 CXX test/cpp_headers/notify.o 00:03:50.554 CXX test/cpp_headers/nvme.o 00:03:50.554 CXX test/cpp_headers/nvme_intel.o 00:03:50.815 CXX test/cpp_headers/nvme_ocssd.o 00:03:50.815 LINK fused_ordering 00:03:50.815 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:50.815 CXX test/cpp_headers/nvme_spec.o 00:03:50.815 CXX test/cpp_headers/nvme_zns.o 00:03:50.815 LINK doorbell_aers 00:03:50.815 CXX test/cpp_headers/nvmf_cmd.o 00:03:50.815 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:50.815 CXX test/cpp_headers/nvmf.o 00:03:50.815 CXX test/cpp_headers/nvmf_transport.o 00:03:50.815 CXX test/cpp_headers/nvmf_spec.o 00:03:50.815 CXX test/cpp_headers/opal.o 00:03:50.815 CXX test/cpp_headers/pci_ids.o 00:03:50.815 CXX test/cpp_headers/opal_spec.o 00:03:50.815 CXX test/cpp_headers/pipe.o 00:03:50.815 CXX test/cpp_headers/queue.o 00:03:50.815 CXX test/cpp_headers/reduce.o 00:03:50.815 CXX test/cpp_headers/rpc.o 00:03:50.815 CXX test/cpp_headers/scheduler.o 00:03:50.815 CXX test/cpp_headers/scsi.o 00:03:50.815 CXX test/cpp_headers/scsi_spec.o 00:03:50.815 CXX test/cpp_headers/sock.o 00:03:50.815 
LINK nvme_compliance 00:03:50.815 CXX test/cpp_headers/stdinc.o 00:03:50.815 CXX test/cpp_headers/string.o 00:03:50.815 CXX test/cpp_headers/thread.o 00:03:50.815 CXX test/cpp_headers/trace.o 00:03:50.815 CXX test/cpp_headers/trace_parser.o 00:03:50.815 CXX test/cpp_headers/tree.o 00:03:50.815 CXX test/cpp_headers/ublk.o 00:03:50.815 CXX test/cpp_headers/util.o 00:03:51.074 CXX test/cpp_headers/uuid.o 00:03:51.074 CXX test/cpp_headers/version.o 00:03:51.074 CXX test/cpp_headers/vfio_user_pci.o 00:03:51.074 CXX test/cpp_headers/vfio_user_spec.o 00:03:51.074 CXX test/cpp_headers/vhost.o 00:03:51.074 CXX test/cpp_headers/vmd.o 00:03:51.074 LINK fdp 00:03:51.074 CXX test/cpp_headers/xor.o 00:03:51.074 CXX test/cpp_headers/zipf.o 00:03:51.074 LINK memory_ut 00:03:52.008 LINK iscsi_fuzz 00:03:52.008 LINK cuse 00:03:55.302 LINK esnap 00:03:55.302 00:03:55.302 real 0m40.639s 00:03:55.302 user 7m33.258s 00:03:55.302 sys 1m50.191s 00:03:55.302 19:51:42 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:55.302 19:51:42 make -- common/autotest_common.sh@10 -- $ set +x 00:03:55.302 ************************************ 00:03:55.302 END TEST make 00:03:55.302 ************************************ 00:03:55.302 19:51:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:55.302 19:51:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:55.302 19:51:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:55.302 19:51:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.302 19:51:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:55.302 19:51:42 -- pm/common@44 -- $ pid=2956509 00:03:55.302 19:51:42 -- pm/common@50 -- $ kill -TERM 2956509 00:03:55.302 19:51:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.302 19:51:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:55.302 19:51:42 -- pm/common@44 -- $ pid=2956511 00:03:55.302 19:51:42 -- pm/common@50 -- $ kill -TERM 2956511 00:03:55.302 19:51:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.302 19:51:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:55.302 19:51:42 -- pm/common@44 -- $ pid=2956513 00:03:55.302 19:51:42 -- pm/common@50 -- $ kill -TERM 2956513 00:03:55.302 19:51:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.302 19:51:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:55.302 19:51:42 -- pm/common@44 -- $ pid=2956541 00:03:55.302 19:51:42 -- pm/common@50 -- $ sudo -E kill -TERM 2956541 00:03:55.302 19:51:42 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:55.302 19:51:42 -- nvmf/common.sh@7 -- # uname -s 00:03:55.302 19:51:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:55.302 19:51:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:55.302 19:51:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:55.302 19:51:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:55.302 19:51:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:55.302 19:51:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:55.302 19:51:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:55.302 19:51:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:03:55.302 19:51:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:55.302 19:51:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:55.302 19:51:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:55.302 19:51:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:55.302 19:51:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:55.302 19:51:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:55.302 19:51:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:55.302 19:51:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:55.302 19:51:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:55.302 19:51:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:55.302 19:51:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:55.302 19:51:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:55.302 19:51:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.302 19:51:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.302 19:51:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.302 19:51:42 -- paths/export.sh@5 -- # export PATH 00:03:55.302 19:51:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:55.302 19:51:42 -- nvmf/common.sh@47 -- # : 0 00:03:55.302 19:51:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:55.302 19:51:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:55.302 19:51:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:55.302 19:51:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:55.302 19:51:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:55.302 19:51:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:55.302 19:51:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:55.302 19:51:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:55.302 19:51:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:55.302 19:51:42 -- spdk/autotest.sh@32 -- # uname -s 00:03:55.302 19:51:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:55.302 19:51:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:55.302 19:51:42 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:55.302 19:51:42 -- spdk/autotest.sh@39 -- # echo 
'|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:55.302 19:51:42 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:55.302 19:51:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:55.302 19:51:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:55.302 19:51:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:55.302 19:51:42 -- spdk/autotest.sh@48 -- # udevadm_pid=3032449 00:03:55.302 19:51:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:55.302 19:51:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:55.302 19:51:42 -- pm/common@17 -- # local monitor 00:03:55.302 19:51:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.302 19:51:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.302 19:51:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.302 19:51:42 -- pm/common@21 -- # date +%s 00:03:55.302 19:51:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:55.302 19:51:42 -- pm/common@21 -- # date +%s 00:03:55.302 19:51:42 -- pm/common@25 -- # sleep 1 00:03:55.302 19:51:42 -- pm/common@21 -- # date +%s 00:03:55.302 19:51:42 -- pm/common@21 -- # date +%s 00:03:55.302 19:51:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720893102 00:03:55.302 19:51:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720893102 00:03:55.302 19:51:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720893102 00:03:55.302 19:51:42 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720893102 00:03:55.302 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720893102_collect-vmstat.pm.log 00:03:55.302 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720893102_collect-cpu-load.pm.log 00:03:55.302 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720893102_collect-cpu-temp.pm.log 00:03:55.302 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720893102_collect-bmc-pm.bmc.pm.log 00:03:56.232 19:51:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:56.232 19:51:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:56.232 19:51:43 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:56.232 19:51:43 -- common/autotest_common.sh@10 -- # set +x 00:03:56.232 19:51:43 -- spdk/autotest.sh@59 -- # create_test_list 00:03:56.232 19:51:43 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:56.232 19:51:43 -- common/autotest_common.sh@10 -- # set +x 00:03:56.232 19:51:43 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:56.232 19:51:43 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:56.232 19:51:43 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:56.232 19:51:43 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:56.232 19:51:43 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:56.232 19:51:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:56.232 19:51:43 -- common/autotest_common.sh@1451 -- # uname 00:03:56.232 19:51:43 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:56.232 19:51:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:56.232 19:51:43 -- common/autotest_common.sh@1471 -- # uname 00:03:56.232 19:51:43 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:56.232 19:51:43 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:56.232 19:51:43 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:56.232 19:51:43 -- spdk/autotest.sh@72 -- # hash lcov 00:03:56.232 19:51:43 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:56.232 19:51:43 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:56.232 --rc lcov_branch_coverage=1 00:03:56.232 --rc lcov_function_coverage=1 00:03:56.232 --rc genhtml_branch_coverage=1 00:03:56.232 --rc genhtml_function_coverage=1 00:03:56.232 --rc genhtml_legend=1 00:03:56.232 --rc geninfo_all_blocks=1 00:03:56.232 ' 00:03:56.233 19:51:43 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:56.233 --rc lcov_branch_coverage=1 00:03:56.233 --rc lcov_function_coverage=1 00:03:56.233 --rc genhtml_branch_coverage=1 00:03:56.233 --rc genhtml_function_coverage=1 00:03:56.233 --rc genhtml_legend=1 00:03:56.233 --rc geninfo_all_blocks=1 00:03:56.233 ' 00:03:56.233 19:51:43 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:56.233 --rc lcov_branch_coverage=1 00:03:56.233 --rc lcov_function_coverage=1 00:03:56.233 --rc genhtml_branch_coverage=1 00:03:56.233 --rc genhtml_function_coverage=1 00:03:56.233 --rc genhtml_legend=1 00:03:56.233 --rc geninfo_all_blocks=1 00:03:56.233 --no-external' 00:03:56.233 19:51:43 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:56.233 --rc lcov_branch_coverage=1 00:03:56.233 --rc lcov_function_coverage=1 00:03:56.233 --rc genhtml_branch_coverage=1 00:03:56.233 --rc genhtml_function_coverage=1 00:03:56.233 --rc genhtml_legend=1 00:03:56.233 --rc geninfo_all_blocks=1 00:03:56.233 --no-external' 00:03:56.233 19:51:43 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:56.489 lcov: LCOV version 1.14 00:03:56.490 19:51:43 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:11.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:11.365 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 
00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:26.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:26.294 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:26.295 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:26.295 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:26.295 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:26.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:26.296 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:26.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:26.296 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:26.296 
00:04:29.593 19:52:16 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:04:29.593 19:52:16 -- common/autotest_common.sh@720 -- # xtrace_disable
00:04:29.593 19:52:16 -- common/autotest_common.sh@10 -- # set +x
00:04:29.593 19:52:16 -- spdk/autotest.sh@91 -- # rm -f
00:04:29.593 19:52:16 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:30.529 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:04:30.529 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:04:30.529 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:04:30.529 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:04:30.529 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:04:30.529 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:04:30.529 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:04:30.529 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:04:30.529 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:04:30.529 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:04:30.529 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:04:30.529 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:04:30.529 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:04:30.529 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:04:30.529 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:04:30.529 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:04:30.529 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:04:30.789 19:52:18 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:04:30.789 19:52:18 -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:04:30.789 19:52:18 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:04:30.789 19:52:18 -- common/autotest_common.sh@1666 -- # local nvme bdf
00:04:30.789 19:52:18 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:04:30.789 19:52:18 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:04:30.789 19:52:18 -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:04:30.789 19:52:18 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:30.789 19:52:18 -- common/autotest_common.sh@1661 -- # [[ none != none ]]
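The get_zoned_devs trace above reduces to a sysfs probe; a minimal sketch under the layout the trace shows (/sys/block/nvme*/queue/zoned reading "none" for conventional drives), simplified rather than the verbatim autotest_common.sh:

    # Collect zoned NVMe block devices; conventional drives read "none".
    shopt -s nullglob
    declare -A zoned_devs
    for nvme in /sys/block/nvme*; do
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[${nvme##*/}]=1   # record the zoned device by name
        fi
    done
    # Empty on this host, so the (( 0 > 0 )) guard that follows is false.
    echo "zoned devices found: ${#zoned_devs[@]}"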
00:04:30.789 19:52:18 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:04:30.789 19:52:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:04:30.789 19:52:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:04:30.789 19:52:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:04:30.789 19:52:18 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:04:30.789 19:52:18 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:30.789 No valid GPT data, bailing
00:04:30.789 19:52:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:30.789 19:52:18 -- scripts/common.sh@391 -- # pt=
00:04:30.789 19:52:18 -- scripts/common.sh@392 -- # return 1
00:04:30.789 19:52:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:30.789 1+0 records in
00:04:30.789 1+0 records out
00:04:30.789 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00258503 s, 406 MB/s
00:04:30.789 19:52:18 -- spdk/autotest.sh@118 -- # sync
00:04:30.789 19:52:18 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:30.789 19:52:18 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:30.789 19:52:18 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:32.694 19:52:20 -- spdk/autotest.sh@124 -- # uname -s
00:04:32.694 19:52:20 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:04:32.694 19:52:20 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:04:32.694 19:52:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:32.694 19:52:20 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:32.694 19:52:20 -- common/autotest_common.sh@10 -- # set +x
00:04:32.694 ************************************
00:04:32.694 START TEST setup.sh
00:04:32.694 ************************************
00:04:32.694 19:52:20 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:04:32.694 * Looking for test storage...
00:04:32.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:32.694 19:52:20 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:04:32.694 19:52:20 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:04:32.694 19:52:20 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:04:32.694 19:52:20 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:32.694 19:52:20 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:32.694 19:52:20 setup.sh -- common/autotest_common.sh@10 -- # set +x
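The block_in_use probe traced earlier in this block (scripts/common.sh@378-392) is effectively a partition-table check: spdk-gpt.py finds no GPT, blkid finds no PTTYPE, so the function returns 1 and autotest scrubs the first MiB of the disk. A rough equivalent of that flow as a sketch (spdk-gpt.py remains the real arbiter in the script):

    block_in_use() {
        local block=$1 pt
        # blkid prints the partition-table type ("gpt", "dos", ...) or nothing
        pt=$(blkid -s PTTYPE -o value "$block")
        [[ -n $pt ]]   # "in use" iff some partition table is present
    }
    if ! block_in_use /dev/nvme0n1; then
        dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1   # as autotest.sh@114 does
    fi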
00:04:32.694 ************************************
00:04:32.694 START TEST acl
00:04:32.694 ************************************
00:04:32.694 19:52:20 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:04:32.694 * Looking for test storage...
00:04:32.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:32.694 19:52:20 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:04:32.694 19:52:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:04:32.694 19:52:20 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:04:32.694 19:52:20 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf
00:04:32.694 19:52:20 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:04:32.694 19:52:20 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:04:32.694 19:52:20 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:04:32.694 19:52:20 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:32.694 19:52:20 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:04:32.694 19:52:20 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:04:32.694 19:52:20 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:04:32.694 19:52:20 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:04:32.694 19:52:20 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:04:32.694 19:52:20 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:04:32.694 19:52:20 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:32.694 19:52:20 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:34.070 19:52:21 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:04:34.070 19:52:21 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:04:34.070 19:52:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:34.070 19:52:21 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:04:34.070 19:52:21 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:04:34.070 19:52:21 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:35.444 Hugepages
00:04:35.444 node hugesize free / total
00:04:35.444 [setup/acl.sh@19: the hugepage rows (1048576kB and 2048kB per node) do not match *:*:*.* and are skipped via continue]
00:04:35.444
00:04:35.444 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:35.445 [setup/acl.sh@19-20: I/OAT channels 0000:00:04.0-0000:00:04.7 and 0000:80:04.0-0000:80:04.7 (ioatdma) match the BDF pattern but fail [[ ioatdma == nvme ]], each skipped via continue]
00:04:35.445 19:52:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]]
00:04:35.445 19:52:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:35.445 19:52:22 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:04:35.445 19:52:22 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:04:35.445 19:52:22 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:35.445 19:52:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:35.445 19:52:22 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:04:35.445 19:52:22 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:04:35.445 19:52:22 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:35.445 19:52:22 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:35.445 19:52:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:35.445 ************************************
00:04:35.445 START TEST denied
00:04:35.445 ************************************
00:04:35.445 19:52:22 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied
00:04:35.445 19:52:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0'
00:04:35.445 19:52:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:04:35.445 19:52:22 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0'
00:04:35.445 19:52:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:04:35.445 19:52:22 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:36.817 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0
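With PCI_BLOCKED set, setup.sh config leaves 0000:88:00.0 on the nvme driver, and the verify step that follows simply resolves the bound driver through sysfs. A condensed sketch of that check (names follow the trace; simplified from setup/acl.sh):

    # Confirm each blocked controller is still bound to its original driver.
    verify() {
        local dev driver
        for dev in "$@"; do
            [[ -e /sys/bus/pci/devices/$dev ]] || return 1
            driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
            [[ ${driver##*/} == nvme ]] || return 1   # e.g. .../drivers/nvme
        done
    }
    verify 0000:88:00.0 && echo "denied controller untouched"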
00:04:36.817 19:52:24 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0
00:04:36.817 19:52:24 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:04:36.817 19:52:24 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:04:36.817 19:52:24 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]]
00:04:36.817 19:52:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver
00:04:36.817 19:52:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:36.817 19:52:24 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:36.817 19:52:24 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:04:36.817 19:52:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:36.817 19:52:24 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:39.346
00:04:39.346 real 0m3.828s
00:04:39.346 user 0m1.109s
00:04:39.346 sys 0m1.831s
00:04:39.346 19:52:26 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:39.346 19:52:26 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:04:39.346 ************************************
00:04:39.346 END TEST denied
00:04:39.346 ************************************
00:04:39.346 19:52:26 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:39.346 19:52:26 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:39.346 19:52:26 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:39.346 19:52:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:39.346 ************************************
00:04:39.346 START TEST allowed
00:04:39.346 ************************************
00:04:39.346 19:52:26 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed
00:04:39.346 19:52:26 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0
00:04:39.346 19:52:26 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:04:39.346 19:52:26 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*'
00:04:39.346 19:52:26 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:04:39.346 19:52:26 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:41.900 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:04:41.900 19:52:29 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:04:41.900 19:52:29 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:04:41.900 19:52:29 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:04:41.900 19:52:29 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:41.900 19:52:29 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:43.296
00:04:43.296 real 0m3.833s
00:04:43.296 user 0m0.969s
00:04:43.296 sys 0m1.657s
00:04:43.296 19:52:30 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:43.296 19:52:30 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:04:43.296 ************************************
00:04:43.296 END TEST allowed
00:04:43.296 ************************************
00:04:43.296
00:04:43.296 real 0m10.328s
00:04:43.296 user 0m3.114s
00:04:43.296 sys 0m5.188s
00:04:43.296 19:52:30 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:43.296 19:52:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:43.296 ************************************
00:04:43.296 END TEST acl
00:04:43.296 ************************************
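The allowed test above greps for the opposite outcome: setup.sh config rebinds 0000:88:00.0 from nvme to vfio-pci. The standard sysfs flow behind such a rebind looks like this (generic kernel driver_override interface, not SPDK's exact code path):

    bdf=0000:88:00.0
    echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"    # detach nvme
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"  # pin next driver
    echo "$bdf"   > /sys/bus/pci/drivers_probe                   # re-probe device
    readlink -f "/sys/bus/pci/devices/$bdf/driver"   # now .../drivers/vfio-pci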
00:04:43.296 19:52:30 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:04:43.296 19:52:30 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:43.296 19:52:30 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:43.296 19:52:30 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:43.296 ************************************
00:04:43.296 START TEST hugepages
00:04:43.296 ************************************
00:04:43.296 19:52:30 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:04:43.296 * Looking for test storage...
00:04:43.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:43.296 19:52:30 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41732924 kB' 'MemAvailable: 45241296 kB' 'Buffers: 2704 kB' 'Cached: 12212848 kB' 'SwapCached: 0 kB' 'Active: 9220044 kB' 'Inactive: 3506552 kB' 'Active(anon): 8825692 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514536 kB' 'Mapped: 164924 kB' 'Shmem: 8314648 kB' 'KReclaimable: 202184 kB' 'Slab: 578396 kB' 'SReclaimable: 202184 kB' 'SUnreclaim: 376212 kB' 'KernelStack: 12784 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 9945016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
00:04:43.296 [setup/common.sh@31-32: the loop reads each field above in turn; every var from MemTotal through HugePages_Surp fails [[ $var == Hugepagesize ]] and hits continue]
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
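The get_meminfo trace above (setup/common.sh@17-33) is a field extractor over /proc/meminfo, or over a per-node meminfo file when a node argument is given; it prints the value of the first matching field. A simplified reading of it, using the same names as the trace (a sketch, not the verbatim setup/common.sh):

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # Per-node files live under /sys/devices/system/node/nodeN/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip "Node N " prefixes (extglob)
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo Hugepagesize   # -> 2048 on this host, matching the echo above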
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:43.297 19:52:30 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:43.297 19:52:30 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:43.297 19:52:30 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:43.297 19:52:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:43.297 ************************************
00:04:43.297 START TEST default_setup
00:04:43.297 ************************************
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.297 19:52:30 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
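default_setup requests 1024 pages of 2048 kB on node 0 (2097152 kB / 2048 kB = 1024); the writes ultimately land in per-node sysfs knobs, the same knobs the clear_hp pass zeroed earlier. The underlying mechanism as a sketch (paths from the trace, run as root; setup.sh wraps this with more bookkeeping):

    # Reserve 1024 x 2 MiB hugepages on NUMA node 0
    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    grep -E 'HugePages_(Total|Free)' /proc/meminfo   # expect 1024 / 1024
    # clear_hp-style cleanup: zero every hugepage size on every node
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done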
00:04:44.674 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:44.674 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:44.674 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:44.674 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:44.674 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:44.674 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:44.674 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:44.674 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:44.674 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:44.674 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:44.674 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:44.674 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:44.674 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:44.674 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:44.674 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:44.674 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:45.614 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:45.614 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:45.615 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43840612 kB' 'MemAvailable: 47348968 kB' 'Buffers: 2704 kB' 'Cached: 12212940 kB' 'SwapCached: 0 kB' 'Active: 9237736 kB' 'Inactive: 3506552 kB' 'Active(anon): 8843384 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531820 kB' 'Mapped: 164884 kB' 'Shmem: 8314740 kB' 'KReclaimable: 202152 kB' 'Slab: 577960 kB' 'SReclaimable: 202152 kB' 'SUnreclaim: 375808 kB' 'KernelStack: 12880 kB' 'PageTables: 7632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9965300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
00:04:45.615 [setup/common.sh@31-32: the scan walks the fields above in order; MemTotal through CommitLimit each fail [[ $var == AnonHugePages ]] and hit continue]
00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- #
continue 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- 
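
For reference, the lookup this trace keeps replaying is a plain key scan of /proc/meminfo. A minimal standalone sketch of the same pattern, reconstructed from the trace alone (meminfo_lookup is a hypothetical name, not the real get_meminfo in setup/common.sh, which also handles per-node files):

  #!/usr/bin/env bash
  # Scan /proc/meminfo line by line and print the value of one key.
  meminfo_lookup() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Non-matching keys produce the "[[ <key> == ... ]] / continue"
          # pairs seen throughout the trace above.
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  meminfo_lookup AnonHugePages   # prints 0 on this host, matching anon=0
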
00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:45.616 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43841500 kB' 'MemAvailable: 47349856 kB' 'Buffers: 2704 kB' 'Cached: 12212948 kB' 'SwapCached: 0 kB' 'Active: 9237436 kB' 'Inactive: 3506552 kB' 'Active(anon): 8843084 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531576 kB' 'Mapped: 164964 kB' 'Shmem: 8314748 kB' 'KReclaimable: 202152 kB' 'Slab: 577916 kB' 'SReclaimable: 202152 kB' 'SUnreclaim: 375764 kB' 'KernelStack: 12832 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9965684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
[scan condensed: every key of the snapshot above, MemTotal through HugePages_Rsvd, is tested against HugePages_Surp and skipped with continue (setup/common.sh@31-32, 00:04:45.616-618)]
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
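
The [[ -e /sys/devices/system/node/node/meminfo ]] test above is the per-NUMA-node branch of the same helper evaluated with an empty node argument, and mem=("${mem[@]#Node +([0-9]) }") is the extglob strip that normalizes per-node lines. A sketch of that source selection under the same assumptions (variable names mirror the trace; the exact branch wiring is reconstructed, not copied):

  #!/usr/bin/env bash
  # With no node given, read system-wide /proc/meminfo; with a node number,
  # read the per-NUMA-node file, whose lines carry a "Node N " prefix.
  shopt -s extglob
  node=${1-}                      # empty in this trace => system-wide stats
  mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  # "Node 0 MemTotal: ..." -> "MemTotal: ...", so both sources parse alike.
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]:0:4}"   # show the first few normalized lines
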
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:45.618 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43841924 kB' 'MemAvailable: 47350280 kB' 'Buffers: 2704 kB' 'Cached: 12212972 kB' 'SwapCached: 0 kB' 'Active: 9237472 kB' 'Inactive: 3506552 kB' 'Active(anon): 8843120 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531656 kB' 'Mapped: 164904 kB' 'Shmem: 8314772 kB' 'KReclaimable: 202152 kB' 'Slab: 577948 kB' 'SReclaimable: 202152 kB' 'SUnreclaim: 375796 kB' 'KernelStack: 12800 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9966668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
[scan condensed: every key of the snapshot above, MemTotal through HugePages_Free, is tested against HugePages_Rsvd and skipped with continue (setup/common.sh@31-32, 00:04:45.618-620)]
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
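
Those two arithmetic guards are the pass/fail core of default_setup: the pool must consist of exactly the requested pages, with no surplus or reserved pages, and the anon (THP) count must not be mistaken for it. A self-contained sketch of that accounting (hp is a hypothetical stand-in for the get_meminfo helper; 1024 is the count this run configured):

  #!/usr/bin/env bash
  # Re-derive the values the trace reports and apply the same checks.
  hp() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }  # hypothetical helper

  requested=1024                      # requested page count, per the trace
  anon=$(hp AnonHugePages)            # hugepages.sh@97  -> anon=0
  surp=$(hp HugePages_Surp)           # hugepages.sh@99  -> surp=0
  resv=$(hp HugePages_Rsvd)           # hugepages.sh@100 -> resv=0
  nr_hugepages=$(hp HugePages_Total)  # -> 1024

  (( requested == nr_hugepages + surp + resv )) || exit 1   # hugepages.sh@107
  (( requested == nr_hugepages ))               || exit 1   # hugepages.sh@109
  echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
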
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43842212 kB' 'MemAvailable: 47350568 kB' 'Buffers: 2704 kB' 'Cached: 12212972 kB' 'SwapCached: 0 kB' 'Active: 9240196 kB' 'Inactive: 3506552 kB' 'Active(anon): 8845844 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534424 kB' 'Mapped: 165340 kB' 'Shmem: 8314772 kB' 'KReclaimable: 202152 kB' 'Slab: 577948 kB' 'SReclaimable: 202152 kB' 'SUnreclaim: 375796 kB' 'KernelStack: 12800 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9969724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
00:04:45.620 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [... per-field compare-and-continue iterations until HugePages_Total matches ...]
00:04:45.621 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:45.621 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
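Unrolled, the long trace above is a single field lookup over /proc/meminfo. A compact sketch of that get_meminfo pattern, reconstructed from the setup/common.sh lines visible in the trace (illustrative, not the verbatim SPDK helper):

    # Look up one field from /proc/meminfo, or from a node's meminfo in sysfs.
    shopt -s extglob                           # required for the +([0-9]) pattern
    get_meminfo() {
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node counters live under sysfs when a node index is supplied.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # node files prefix each line with "Node N "
        local IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"      # e.g. var=HugePages_Total val=1024
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Total                # prints 1024 on this host

The compare-and-continue lines in the trace are exactly this loop running under xtrace: every non-matching field emits one [[ ... ]] entry and one continue entry.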
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26741552 kB' 'MemUsed: 6088332 kB' 'SwapCached: 0 kB' 'Active: 2706216 kB' 'Inactive: 108416 kB' 'Active(anon): 2595328 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2541696 kB' 'Mapped: 33304 kB' 'AnonPages: 276204 kB' 'Shmem: 2322392 kB' 'KernelStack: 7288 kB' 'PageTables: 4836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91452 kB' 'Slab: 313212 kB' 'SReclaimable: 91452 kB' 'SUnreclaim: 221760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:45.622 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [... per-field compare-and-continue iterations until HugePages_Surp matches ...]
00:04:45.623 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.623 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:45.623 19:52:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:45.623 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:45.623 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:45.623 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:45.623 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:45.623 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:45.623 node0=1024 expecting 1024
00:04:45.623 19:52:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:45.623
00:04:45.623 real    0m2.424s
00:04:45.623 user    0m0.666s
00:04:45.623 sys     0m0.937s
00:04:45.623 19:52:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:45.623 19:52:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:45.623 ************************************
00:04:45.623 END TEST default_setup
00:04:45.623 ************************************
00:04:45.623 19:52:33 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:45.623 19:52:33 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:45.623 19:52:33 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:45.623 19:52:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:45.623 ************************************
00:04:45.623 START TEST per_node_1G_alloc
00:04:45.623 ************************************
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:45.623 19:52:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:47.006 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:47.006 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:47.006 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:47.006 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:47.006 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:47.006 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:47.006 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:47.006 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:47.006 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:47.006 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:47.006 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:47.006 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:47.006 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:47.006 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:47.006 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:47.006 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:47.006 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
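The 512 above is simple arithmetic: get_test_nr_hugepages is asked for 1048576 kB (1 GiB) on each of nodes 0 and 1, and with the 2048 kB default hugepage size reported in the meminfo snapshots that is 1048576 / 2048 = 512 pages per node, hence NRHUGE=512 and HUGENODE=0,1. A hedged sketch of that conversion (variable names are illustrative, not the harness's own):

    # Convert a per-node size request in kB into a hugepage count.
    size_kb=1048576                                              # 1 GiB per node
    page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
    echo "NRHUGE=$(( size_kb / page_kb )) HUGENODE=0,1"          # NRHUGE=512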
setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43823464 kB' 'MemAvailable: 47331820 kB' 'Buffers: 2704 kB' 'Cached: 12213060 kB' 'SwapCached: 0 kB' 'Active: 9237928 kB' 'Inactive: 3506552 kB' 'Active(anon): 8843576 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532144 kB' 'Mapped: 165020 kB' 'Shmem: 8314860 kB' 'KReclaimable: 202152 kB' 'Slab: 578224 kB' 'SReclaimable: 202152 kB' 'SUnreclaim: 376072 kB' 'KernelStack: 12816 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9965912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB' 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.006 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.006 19:52:34 
[trace condensed: setup/common.sh@31-32 kept looping IFS=': ' read -r var val _ over the remaining /proc/meminfo keys (Inactive(anon) … HardwareCorrupted), hitting continue on every non-matching key until AnonHugePages matched the literal pattern]
00:04:47.007 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.007 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:47.007 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:47.007 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:47.007 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.007 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
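[editor's note] The scan above is bash xtrace output from SPDK's get_meminfo helper in setup/common.sh; the backslash-escaped patterns (\A\n\o\n\H\u\g\e\P\a\g\e\s) are simply how xtrace prints a quoted literal so it cannot be read as a glob. A minimal standalone sketch of the same lookup pattern follows — the function name and layout here are illustrative, not the helper's exact source:

#!/usr/bin/env bash
# Sketch: look up one field from /proc/meminfo the way the traced loop does.
lookup_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do     # splits "Key: value kB" into 3 words
        [[ $var == "$get" ]] || continue     # skip every non-matching key
        echo "${val:-0}"                     # print the value, 0 if empty
        return 0
    done < /proc/meminfo
    return 1
}
lookup_meminfo AnonHugePages    # e.g. prints 0 (kB), matching anon=0 above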
00:04:47.007 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:47.007 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.007 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.007 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.007 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.007 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.007 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.008 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43826076 kB' 'MemAvailable: 47334432 kB' 'Buffers: 2704 kB' 'Cached: 12213060 kB' 'SwapCached: 0 kB' 'Active: 9237596 kB' 'Inactive: 3506552 kB' 'Active(anon): 8843244 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531792 kB' 'Mapped: 165004 kB' 'Shmem: 8314860 kB' 'KReclaimable: 202152 kB' 'Slab: 578200 kB' 'SReclaimable: 202152 kB' 'SUnreclaim: 376048 kB' 'KernelStack: 12784 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9965928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
[trace condensed: setup/common.sh@31-32 scanned MemTotal … HugePages_Rsvd, continuing past each key until HugePages_Surp matched]
00:04:47.009 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.009 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:47.009 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
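[editor's note] The [[ -e /sys/devices/system/node/node/meminfo ]] test above looks odd because node= is empty: with no node id the sysfs path cannot exist, so the helper falls back to /proc/meminfo. A hedged sketch of the per-NUMA-node variant the trace implies (standard sysfs layout; helper name illustrative):

#!/usr/bin/env bash
shopt -s extglob    # required for the +([0-9]) pattern used below
# Same lookup, optionally scoped to one NUMA node. Per-node meminfo lines
# carry a "Node <n> " prefix ("Node 0 HugePages_Total: ..."), which is what
# the mem=("${mem[@]#Node +([0-9]) }") step seen in the trace strips off.
lookup_meminfo() {
    local get=$1 node=$2 var val _ mem_f line mem
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop "Node N " prefixes, if present
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done
    return 1
}
lookup_meminfo HugePages_Surp      # system-wide, like the call traced above
lookup_meminfo HugePages_Surp 0    # node 0 only, when that sysfs file exists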
00:04:47.009 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[trace condensed: get_meminfo HugePages_Rsvd ran the same path — local get=HugePages_Rsvd, node unset, /proc/meminfo reread and prefix-stripped — then scanned the fields until HugePages_Rsvd matched, echoed 0 and returned 0. The snapshot is essentially identical to the one above; a few counters drifted slightly (e.g. MemFree: 43825320 kB, MemAvailable: 47333676 kB, AnonPages: 531656 kB, PageTables: 8016 kB, Committed_AS: 9965952 kB) while all hugepage counters stayed at HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0]
00:04:47.011 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:47.011 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:47.011 nr_hugepages=1024
00:04:47.011 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:47.011 resv_hugepages=0
00:04:47.011 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:47.011 surplus_hugepages=0
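[editor's note] The checks at setup/hugepages.sh@107-109 just below assert that the pool the test requested is exactly what the kernel reports: with surp=0 and resv=0, the requested 1024 pages must equal both nr_hugepages + surp + resv and nr_hugepages itself. A hedged restatement using standard kernel interfaces (counter meanings follow Documentation/admin-guide/mm/hugetlbpage.rst; want and the 2048 kB page size are taken from the trace, the script itself is not the test's code):

#!/usr/bin/env bash
# HugePages_Total  current pool size       HugePages_Free  pages not faulted in
# HugePages_Rsvd   promised to mappings    HugePages_Surp  overcommit surplus
want=1024
read -r nr < /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
if (( want == nr + surp + resv )) && (( want == nr )); then
    echo "hugepage pool consistent: $nr x 2048 kB pages"
else
    echo "unexpected pool state: nr=$nr surp=$surp resv=$resv" >&2
fi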
00:04:47.011 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:47.011 anon_hugepages=0
00:04:47.011 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:47.011 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:47.011 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[trace condensed: get_meminfo HugePages_Total entered with node unset, reread /proc/meminfo (MemFree now 43824312 kB, MemAvailable 47332668 kB, Cached 12213104 kB; hugepage counters still HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0) and began the field scan, checking MemTotal … Zswapped so far]
00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@31 -- # IFS=': ' 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.012 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- 
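[Editor's note] Each of the long scans above is one pass of the setup/common.sh get_meminfo helper under xtrace: the printf dumps the meminfo snapshot, and every non-matching key then appears as one "[[ ... ]]" test plus "continue" until the requested key hits "echo"/"return 0". A reconstruction from the trace (a sketch, not a copy of the real helper):

    # Reconstructed from the xtrace: print one meminfo value, optionally for
    # a single NUMA node. extglob is needed for the +([0-9]) pattern below.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}       # key to look up, optional NUMA node
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # Per-node queries read the node-local file instead (common.sh@23-24);
        # with node unset the path does not exist, as the trace shows.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip any "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Total        # system-wide: 1024 in the run above
    get_meminfo HugePages_Surp 0       # node 0 only: 0 in the run above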
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27781652 kB' 'MemUsed: 5048232 kB' 'SwapCached: 0 kB' 'Active: 2706148 kB' 'Inactive: 108416 kB' 'Active(anon): 2595260 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2541788 kB' 'Mapped: 32612 kB' 'AnonPages: 276000 kB' 'Shmem: 2322484 kB' 'KernelStack: 7288 kB' 'PageTables: 4844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91452 kB' 'Slab: 313336 kB' 'SReclaimable: 91452 kB' 'SUnreclaim: 221884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- 
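[Editor's note] The get_nodes pass traced just above (hugepages.sh@27-33) globs the sysfs node directories and records 512 for each of the two nodes before no_nodes=2 is checked. Where that 512 comes from is not visible in this excerpt; the sketch below reads the kernel's per-node 2048 kB hugepage counter, which yields the same figure on this box, so treat the sysfs path choice as an assumption:

    # Sketch: enumerate NUMA nodes and record each node's hugepage count.
    shopt -s extglob nullglob

    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}              # ".../node1" -> "1"
        nodes_sys[id]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}          # 2 on this machine
    (( no_nodes > 0 )) || { echo 'no NUMA nodes visible in sysfs' >&2; exit 1; }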
setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.013 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.014 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16042948 kB' 'MemUsed: 11668876 kB' 'SwapCached: 0 kB' 'Active: 6531116 kB' 'Inactive: 3398136 kB' 'Active(anon): 6247652 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9674020 kB' 'Mapped: 132316 kB' 'AnonPages: 255380 kB' 'Shmem: 5992420 kB' 'KernelStack: 5528 kB' 'PageTables: 3172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110700 kB' 'Slab: 264880 kB' 'SReclaimable: 110700 kB' 'SUnreclaim: 154180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc 
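[Editor's note] The node-1 snapshot printed above has already had its "Node 1 " prefixes removed by the mem=("${mem[@]#Node +([0-9]) }") expansion seen in the trace, which rewrites every array element in one shot and only works with extglob enabled. A standalone demo using two values from the node-1 readback:

    shopt -s extglob                   # +([0-9]) = one or more digits

    lines=('Node 1 MemTotal: 27711824 kB' 'Node 1 HugePages_Total:   512')
    lines=("${lines[@]#Node +([0-9]) }")   # drop "Node <digits> " per element
    printf '%s\n' "${lines[@]}"
    # MemTotal: 27711824 kB
    # HugePages_Total:   512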
-- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 
19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.015 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:47.016 node0=512 expecting 512 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:47.016 node1=512 expecting 512 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:47.016 00:04:47.016 real 0m1.362s 00:04:47.016 user 0m0.553s 00:04:47.016 sys 0m0.767s 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.016 19:52:34 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:47.016 ************************************ 00:04:47.016 END TEST per_node_1G_alloc 00:04:47.016 ************************************ 00:04:47.016 19:52:34 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:47.016 19:52:34 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.016 19:52:34 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.016 19:52:34 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:47.016 ************************************ 00:04:47.016 
00:04:47.016 ************************************
00:04:47.016 START TEST even_2G_alloc
00:04:47.016 ************************************
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
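The loop just traced walks node indices down from _no_nodes-1 and gives each node an equal share of the 1024 requested pages. An equivalent standalone sketch (simplified; the division helper is my own, not the hugepages.sh source):

    _nr_hugepages=1024
    _no_nodes=2
    nodes_test=()
    per_node=$(( _nr_hugepages / _no_nodes ))   # 512 with the values above
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$per_node
        (( _no_nodes-- ))
    done
    echo "${nodes_test[@]}"   # -> 512 512

That 512/512 target is what the verify step below will compare against the live per-node counters.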
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:47.016 19:52:34 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:48.396 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:48.396 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:48.396 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:48.396 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:48.396 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:48.396 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:48.396 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:48.396 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:48.396 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:48.396 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:48.396 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:48.396 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:48.396 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:48.396 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:48.396 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:48.396 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:48.396 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:48.396 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:48.397 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43826836 kB' 'MemAvailable: 47335192 kB' 'Buffers: 2704 kB' 'Cached: 12213204 kB' 'SwapCached: 0 kB' 'Active: 9237676 kB' 'Inactive: 3506552 kB' 'Active(anon): 8843324 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531536 kB' 'Mapped: 165052 kB' 'Shmem: 8315004 kB' 'KReclaimable: 202152 kB' 'Slab: 578160 kB' 'SReclaimable: 202152 kB' 'SUnreclaim: 376008 kB' 'KernelStack: 12800 kB' 'PageTables: 7932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9966212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
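For reference, the get_meminfo call being traced here reduces to the following shape, paraphrased from the xtrace above rather than copied from setup/common.sh: it picks /proc/meminfo (or the per-node sysfs file when a node is given), strips the "Node N" prefix sysfs adds, then scans key by key until the requested field matches.

    shopt -s extglob   # the +([0-9]) pattern below needs extended globbing
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # with an empty node this sysfs path does not exist, so the
        # system-wide /proc/meminfo is used, exactly as in the trace
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the sysfs "Node N" prefix
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo AnonHugePages   # prints 0 on this box, per the dump above

Every "continue" entry condensed below is one iteration of that while loop skipping a non-matching key.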
00:04:48.397 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: keys MemTotal through HardwareCorrupted read and skipped, none match AnonHugePages]
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43826332 kB' 'MemAvailable: 47334688 kB' 'Buffers: 2704 kB' 'Cached: 12213204 kB' 'SwapCached: 0 kB' 'Active: 9238584 kB' 'Inactive: 3506552 kB' 'Active(anon): 8844232 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532480 kB' 'Mapped: 165052 kB' 'Shmem: 8315004 kB' 'KReclaimable: 202152 kB' 'Slab: 578152 kB' 'SReclaimable: 202152 kB' 'SUnreclaim: 376000 kB' 'KernelStack: 12864 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9966228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
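The dump just above reports HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0, i.e. the full even_2G_alloc pool is allocated and idle. A quick hypothetical sanity check against those numbers (my own arithmetic, not part of the test):

    total=1024 free=1024        # values from the dump above
    in_use=$(( total - free ))  # pages currently backing mappings
    (( in_use == 0 )) && echo "all 1024 hugepages are still free"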
00:04:48.398 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: keys MemTotal through HugePages_Rsvd read and skipped, none match HugePages_Surp]
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43826428 kB' 'MemAvailable: 47334784 kB' 'Buffers: 2704 kB' 'Cached: 12213224 kB' 'SwapCached: 0 kB' 'Active: 9238088 kB' 'Inactive: 3506552 kB' 'Active(anon): 8843736 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531944 kB' 'Mapped: 164940 kB' 'Shmem: 8315024 kB' 'KReclaimable: 202152 kB' 'Slab: 578176 kB' 'SReclaimable: 202152 kB' 'SUnreclaim: 376024 kB' 'KernelStack: 12816 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9966252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
00:04:48.400 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: scan for HugePages_Rsvd in progress, keys MemTotal through Percpu read and skipped; the excerpt ends mid-scan]
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.401 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:48.402 nr_hugepages=1024 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.402 resv_hugepages=0 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.402 surplus_hugepages=0 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.402 anon_hugepages=0 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43826428 
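The trace above is setup/common.sh's get_meminfo at work: an IFS=': ' read loop that walks a meminfo file key by key and echoes the value of the first line whose key matches. A minimal standalone sketch of that technique (the function name, the optional node argument, and the sed-based prefix strip are illustrative, not the upstream implementation):

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live under sysfs when a node index is supplied.
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Per-node files prefix every line with "Node <n> "; strip it so the
    # key lands in $var exactly as it does for /proc/meminfo.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

get_meminfo_sketch HugePages_Rsvd   # prints 0 on this box, matching the trace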
00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' (/proc/meminfo snapshot): 'MemTotal: 60541708 kB' 'MemFree: 43826428 kB' 'MemAvailable: 47334784 kB' 'Buffers: 2704 kB' 'Cached: 12213244 kB' 'SwapCached: 0 kB' 'Active: 9237948 kB' 'Inactive: 3506552 kB' 'Active(anon): 8843596 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531760 kB' 'Mapped: 164940 kB' 'Shmem: 8315044 kB' 'KReclaimable: 202152 kB' 'Slab: 578176 kB' 'SReclaimable: 202152 kB' 'SUnreclaim: 376024 kB' 'KernelStack: 12848 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9966272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
00:04:48.402 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # the IFS=': ' read/compare/continue cycle skips MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted until [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] matches
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[0]=512
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[1]=512
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
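The get_nodes step above discovers the NUMA topology by globbing /sys/devices/system/node/node+([0-9]) and expects the 1024 pages to land as an even split, 512 per node. A self-contained sketch of that bookkeeping, assuming the standard sysfs layout (the awk field positions follow the "Node <n> Key: value" format of per-node meminfo; the output text is illustrative):

nr_hugepages=1024
nodes=(/sys/devices/system/node/node[0-9]*)
no_nodes=${#nodes[@]}
per_node=$(( nr_hugepages / no_nodes ))   # 512 on this 2-node system
for node in "${nodes[@]}"; do
    # Per-node lines read "Node <n> HugePages_Total: <pages>".
    got=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
    echo "${node##*/}: HugePages_Total=$got (expected $per_node)"
done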
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.403 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.404 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' (node0 meminfo snapshot): 'MemTotal: 32829884 kB' 'MemFree: 27787896 kB' 'MemUsed: 5041988 kB' 'SwapCached: 0 kB' 'Active: 2707048 kB' 'Inactive: 108416 kB' 'Active(anon): 2596160 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2541944 kB' 'Mapped: 32612 kB' 'AnonPages: 276736 kB' 'Shmem: 2322640 kB' 'KernelStack: 7304 kB' 'PageTables: 4880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91452 kB' 'Slab: 313292 kB' 'SReclaimable: 91452 kB' 'SUnreclaim: 221840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:48.404 19:52:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # the IFS=': ' read/compare/continue cycle skips MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free until [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] matches
00:04:48.405 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:48.405 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:48.405 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:48.405 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:48.405 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:48.405 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:48.405 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:48.405 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:48.405 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:48.405 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:48.405 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:48.405 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:48.405 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' (node1 meminfo snapshot): 'MemTotal: 27711824 kB' 'MemFree: 16038532 kB' 'MemUsed: 11673292 kB' 'SwapCached: 0 kB' 'Active: 6530880 kB' 'Inactive: 3398136 kB' 'Active(anon): 6247416 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9674028 kB' 'Mapped: 132328 kB' 'AnonPages: 254988 kB' 'Shmem: 5992428 kB' 'KernelStack: 5528 kB' 'PageTables: 3072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110700 kB' 'Slab: 264884 kB' 'SReclaimable: 110700 kB' 'SUnreclaim: 154184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
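Note the mem=("${mem[@]#Node +([0-9]) }") step before each per-node scan: an extglob parameter expansion that drops the "Node <n> " prefix from every line mapfile read in, so node files parse identically to /proc/meminfo. A tiny demonstration using sample lines taken from the node0 snapshot above:

shopt -s extglob                    # +([0-9]) needs extended globbing
mem=('Node 0 MemTotal: 32829884 kB' 'Node 0 HugePages_Surp: 0')
mem=("${mem[@]#Node +([0-9]) }")    # strip the shortest "Node <n> " prefix
printf '%s\n' "${mem[@]}"
# MemTotal: 32829884 kB
# HugePages_Surp: 0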
00:04:48.405 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # the same IFS=': ' read/compare/continue cycle runs over the node1 snapshot (MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, ...) toward HugePages_Surp
00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:48.406 node0=512 expecting 512 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:48.406 node1=512 expecting 512 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:48.406 00:04:48.406 real 0m1.409s 00:04:48.406 user 0m0.590s 00:04:48.406 sys 0m0.782s 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:48.406 19:52:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:48.406 ************************************ 00:04:48.406 END TEST even_2G_alloc 00:04:48.406 ************************************ 00:04:48.664 19:52:36 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:48.664 19:52:36 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:48.664 19:52:36 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:48.664 19:52:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:48.664 ************************************ 00:04:48.664 START TEST odd_alloc 00:04:48.664 ************************************ 00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:48.664 19:52:36 
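Note: every meminfo lookup in this trace follows the same idiom, which is why the condensed scans all read identically. The sketch below is a reconstruction from the xtrace of what setup/common.sh's get_meminfo does, not the verbatim upstream source: fetch one field from /proc/meminfo, or from a per-NUMA-node meminfo file when a node argument is given.

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Sketch of the get_meminfo idiom seen in the trace above
    # (approximate reconstruction; details may differ upstream).
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # NUMA kernels expose per-node counters under sysfs.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node <n> " prefix; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the 'continue' storm in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    # Example: the node1 surplus-page query traced above.
    get_meminfo HugePages_Surp 1

Each skipped field costs one read plus one [[ ]] test, which is exactly the wall of continue records condensed above.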
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:48.664 19:52:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
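Note: the arithmetic above is the point of odd_alloc: 2098176 kB at a 2048 kB page size is 1025 hugepages, which cannot split evenly across two NUMA nodes, so node1 is assigned 512 and node0 absorbs the remaining 513. A loose reconstruction of the hugepages.sh@81-84 loop just traced (illustrative, not the verbatim upstream code):

    # Walk the nodes from the highest index down, giving each node
    # its integer share and rolling the remainder toward node 0.
    _nr_hugepages=1025
    _no_nodes=2
    nodes_test=()
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
        _nr_hugepages=$(( _nr_hugepages - nodes_test[_no_nodes - 1] ))
        _no_nodes=$(( _no_nodes - 1 ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512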
00:04:49.601 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:49.601 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:49.601 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:49.601 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:49.601 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:49.601 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:49.601 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:49.601 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:49.601 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:49.601 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:49.601 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:49.601 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:49.601 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:49.601 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:49.601 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:49.601 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:49.601 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.864 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.865 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43822892 kB' 'MemAvailable: 47331248 kB' 'Buffers: 2704 kB' 'Cached: 12213324 kB' 'SwapCached: 0 kB' 'Active: 9234600 kB' 'Inactive: 3506552 kB' 'Active(anon): 8840248 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528268 kB' 'Mapped: 164168 kB' 'Shmem: 8315124 kB' 'KReclaimable: 202152 kB' 'Slab: 578036 kB' 'SReclaimable: 202152 kB' 'SUnreclaim: 375884 kB' 'KernelStack: 12752 kB' 'PageTables: 7620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9952052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
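Note: before counting explicit hugepages, verify_nr_hugepages probes transparent hugepages. The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" record above is the content of /sys/kernel/mm/transparent_hugepage/enabled (the bracketed word is the active mode) tested against the pattern *[never]*; AnonHugePages is only worth reading when THP is not pinned to never. A small equivalent probe (variable names are illustrative, paths are the standard kernel ABI):

    # Read the THP mode line, e.g. "always [madvise] never";
    # the bracketed entry is the mode currently in effect.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP may be handing out anonymous huge pages; account for them.
        anon_kb=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
    else
        anon_kb=0
    fi
    echo "AnonHugePages: ${anon_kb} kB"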
00:04:49.865 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:49.865 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 walks the remaining 'field: value' lines of the dump above, one IFS=': ' / read -r var val _ / [[ ... ]] / continue cycle per field, until the field name equals AnonHugePages]
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
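Note: anon is settled at this point, and the harness now repeats the identical dump-and-scan twice more, for HugePages_Surp and then HugePages_Rsvd below. Outside a test harness a single pass is enough; a compact one-pass equivalent (illustrative, not part of the SPDK scripts):

    # Pull the counters the verifier reads individually in one awk pass;
    # on this box the dump gives total=1025 free=1025 rsvd=0 surp=0 anon=0.
    eval "$(awk '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):/ {
        gsub(":", "", $1); print tolower($1) "=" $2
    }' /proc/meminfo)"
    echo "total=$hugepages_total free=$hugepages_free rsvd=$hugepages_rsvd surp=$hugepages_surp anon=$anonhugepages"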
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.866 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43822160 kB' 'MemAvailable: 47330500 kB' 'Buffers: 2704 kB' 'Cached: 12213328 kB' 'SwapCached: 0 kB' 'Active: 9235080 kB' 'Inactive: 3506552 kB' 'Active(anon): 8840728 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528792 kB' 'Mapped: 164156 kB' 'Shmem: 8315128 kB' 'KReclaimable: 202120 kB' 'Slab: 577976 kB' 'SReclaimable: 202120 kB' 'SUnreclaim: 375856 kB' 'KernelStack: 12816 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9951700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
[xtrace condensed: same per-field scan as above, this time until the field name equals HugePages_Surp]
00:04:49.867 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.867 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.867 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43822672 kB' 'MemAvailable: 47331012 kB' 'Buffers: 2704 kB' 'Cached: 12213352 kB' 'SwapCached: 0 kB' 'Active: 9234172 kB' 'Inactive: 3506552 kB' 'Active(anon): 8839820 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527852 kB' 'Mapped: 164080 kB' 'Shmem: 8315152 kB' 'KReclaimable: 202120 kB' 'Slab: 577924 kB' 'SReclaimable: 202120 kB' 'SUnreclaim: 375804 kB' 'KernelStack: 12752 kB' 'PageTables: 7548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9951724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32
-- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.868 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 
19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- 
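
The trace above is setup/common.sh's get_meminfo walking every line of /proc/meminfo until the requested key matches; the backslash-heavy right-hand side (\H\u\g\e\P\a\g\e\s\_\R\s\v\d) is simply how bash xtrace renders a quoted, literal pattern in [[ == ]], not a backslash string in the script. A minimal, self-contained sketch of the same pattern (an assumed helper for illustration, not the verbatim SPDK function):

    #!/usr/bin/env bash
    # Fetch one key from /proc/meminfo, or from the per-node copy
    # when a NUMA node number is given as the second argument.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} line var val _ mem
        local mem_f=/proc/meminfo
        # with an empty $node this probes .../node/node/meminfo, which
        # never exists, so the global file is used (as in the trace)
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node <n> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # _ swallows "kB"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Rsvd     # -> 0 on this box
    get_meminfo HugePages_Surp 0   # same key, restricted to NUMA node 0
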
00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:49.869 nr_hugepages=1025
00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:49.869 resv_hugepages=0
00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:49.869 surplus_hugepages=0
00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:49.869 anon_hugepages=0
00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.869 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.870 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.870 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.870 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.870 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.870 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.870 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.870 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43822672 kB' 'MemAvailable: 47331012 kB' 'Buffers: 2704 kB' 'Cached: 12213376 kB' 'SwapCached: 0 kB' 'Active: 9234488 kB' 'Inactive: 3506552 kB' 'Active(anon): 8840136 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528168 kB' 'Mapped: 164080 kB' 'Shmem: 8315176 kB' 'KReclaimable: 202120 kB' 'Slab: 577924 kB' 'SReclaimable: 202120 kB' 'SUnreclaim: 375804 kB' 'KernelStack: 12800 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9952112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
[xtrace elided: per-key "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" scan over the snapshot above]
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
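
At this point hugepages.sh holds all four counters (nr_hugepages=1025, resv=0, surp=0, anon=0) and asserts the pool is consistent: the HugePages_Total read back must equal the requested page count plus surplus and reserved pages. Roughly, using the names as they appear in the trace (the surrounding script and the get_meminfo helper are assumed):

    # consistency check corresponding to hugepages.sh@107-110 above;
    # nr_hugepages is the count the test requested earlier (1025)
    total=$(get_meminfo HugePages_Total)   # 1025
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    surp=$(get_meminfo HugePages_Surp)     # 0
    (( total == nr_hugepages + surp + resv )) || {
        echo "odd_alloc: hugepage pool mismatch" >&2
        exit 1
    }
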
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
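
get_nodes enumerates the NUMA nodes with the extglob pattern node+([0-9]) and records a per-node hugepage count. The odd total of 1025 pages cannot split evenly across two nodes, which is the point of the odd_alloc case: this run ends up with 512 pages on node0 and 513 on node1. A sketch of that enumeration (reading nr_hugepages from sysfs is an assumption; the trace only shows the resulting 512/513 values):

    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything through the last "node",
        # leaving the bare index used as the array subscript
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this machine
    (( no_nodes > 0 )) || exit 1
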
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.871 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27780516 kB' 'MemUsed: 5049368 kB' 'SwapCached: 0 kB' 'Active: 2706408 kB' 'Inactive: 108416 kB' 'Active(anon): 2595520 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2541996 kB' 'Mapped: 31884 kB' 'AnonPages: 276048 kB' 'Shmem: 2322692 kB' 'KernelStack: 7256 kB' 'PageTables: 4672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91452 kB' 'Slab: 313252 kB' 'SReclaimable: 91452 kB' 'SUnreclaim: 221800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: per-key "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" scan over the node0 snapshot above]
00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:49.872 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.873 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.873 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.873 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.873 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16042260 kB' 'MemUsed: 11669564 kB' 'SwapCached: 0 kB' 'Active: 6528272 kB' 'Inactive: 3398136 kB' 'Active(anon): 6244808 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9674124 kB' 'Mapped: 132204 kB' 'AnonPages: 252376 kB' 'Shmem: 5992524 kB' 'KernelStack: 5528 kB' 'PageTables: 2984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110668 kB' 'Slab: 264672 kB' 'SReclaimable: 110668 kB' 'SUnreclaim: 154004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:49.873 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.873 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.873 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.873 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.873 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.873 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:49.873 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.873 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.873 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.873 19:52:37 
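(note: the stripping at setup/common.sh@29 exists because per-node sysfs meminfo files prefix every key with "Node N ", unlike /proc/meminfo. A minimal standalone demo of exactly that extglob expansion, with sample values borrowed from the node1 snapshot above:)

    #!/usr/bin/env bash
    shopt -s extglob                      # +([0-9]) below is an extglob pattern
    # Raw /sys/devices/system/node/node1/meminfo lines look like this:
    mem=('Node 1 MemTotal: 27711824 kB' 'Node 1 HugePages_Surp: 0')
    # The expansion traced at setup/common.sh@29: strip the "Node N " prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"             # -> MemTotal: 27711824 kB
                                          #    HugePages_Surp: 0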
[xtrace condensed: setup/common.sh@31-32 then scanned the 36 node1 meminfo keys (MemTotal through HugePages_Free, in file order) the same way, continuing past every non-matching key]
00:04:49.874 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.874 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.874 19:52:37 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:49.874 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:49.874 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:49.874 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:49.874 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
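(note: the long runs of @31/@32 records in this trace are all one helper, get_meminfo in setup/common.sh, walking a meminfo snapshot key by key. The sketch below is reconstructed from the commands visible in the trace -- a condensed reading aid, not a verbatim copy of setup/common.sh:)

    #!/usr/bin/env bash
    shopt -s extglob                     # needed for the +([0-9]) pattern at @29
    get_meminfo() {                      # usage: get_meminfo HugePages_Surp 1
        local get=$1 node=$2 var val
        local mem_f mem
        mem_f=/proc/meminfo              # @22: system-wide default
        # @23/@24: prefer the per-node sysfs snapshot when one exists
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"                # @28
        mem=("${mem[@]#Node +([0-9]) }")         # @29: drop "Node N " prefixes
        while IFS=': ' read -r var val _; do     # @31
            # @32: the long runs of 'continue' records -- skip non-matching keys
            [[ $var == "$get" ]] || continue
            echo "$val"                          # @33: print the value (0 here)
            return 0
        done < <(printf '%s\n' "${mem[@]}")      # @16: snapshot replayed line by line
        return 1
    }

A linear scan is fine at this scale: each snapshot is a few dozen lines and the helper runs only a handful of times per test, which is why the verbose per-key records are tolerable in the trace.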
00:04:49.874 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
node0=512 expecting 513
00:04:49.874 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:49.874 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:49.874 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:49.874 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
node1=513 expecting 512
00:04:49.874 19:52:37 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:49.874
00:04:49.874 real 0m1.380s
00:04:49.874 user 0m0.591s
00:04:49.874 sys 0m0.749s
00:04:49.874 19:52:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:04:49.874 19:52:37 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:49.874 ************************************
00:04:49.874 END TEST odd_alloc
00:04:49.874 ************************************
00:04:49.874 19:52:37 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:49.874 19:52:37 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:04:49.874 19:52:37 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:49.874 19:52:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:49.874 ************************************
00:04:49.874 START TEST custom_alloc
00:04:49.874 ************************************
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:49.874 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:50.132 19:52:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:51.067 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:51.067 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:51.067 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:51.067 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:51.067 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:51.067 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:51.067 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:51.067 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:51.067 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:51.067 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:51.067 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:51.067 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:51.067 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:51.067 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:51.067 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:51.067 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:51.067 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
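(note: the @181-@183 and @187 records above are custom_alloc assembling the HUGENODE node map that scripts/setup.sh consumes. A self-contained sketch of just that bookkeeping, with variable names mirroring setup/hugepages.sh and values taken from this run:)

    #!/usr/bin/env bash
    IFS=,                                  # @167: makes "${HUGENODE[*]}" join on commas
    nodes_hp=([0]=512 [1]=1024)            # @175/@178: per-node targets for this test
    HUGENODE=()
    _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do                      # @181
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")    # @182
        (( _nr_hugepages += nodes_hp[node] ))              # @183
    done
    echo "HUGENODE='${HUGENODE[*]}'"     # -> HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
    echo "nr_hugepages=$_nr_hugepages"   # -> 1536, matching hugepages.sh@188 below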
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:51.332 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42771120 kB' 'MemAvailable: 46279460 kB' 'Buffers: 2704 kB' 'Cached: 12213464 kB' 'SwapCached: 0 kB' 'Active: 9234696 kB' 'Inactive: 3506552 kB' 'Active(anon): 8840344 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528288 kB' 'Mapped: 164112 kB' 'Shmem: 8315264 kB' 'KReclaimable: 202120 kB' 'Slab: 577852 kB' 'SReclaimable: 202120 kB' 'SUnreclaim: 375732 kB' 'KernelStack: 12816 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9952316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
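(note: the @96 guard in the records above decides whether transparent hugepages could distort the anon accounting. The tested string, "always [madvise] never", has the format of /sys/kernel/mm/transparent_hugepage/enabled -- reading it from that path below is an assumption on my part; this excerpt never shows the source. A sketch of the traced prologue, @94-@97:)

    #!/usr/bin/env bash
    # Stand-in so this snippet runs alone; the real helper is sketched earlier.
    get_meminfo() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0 }' /proc/meminfo; }
    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # assumed source of the string
    if [[ $thp != *"[never]"* ]]; then
        # THP not fully disabled, so AnonHugePages must be accounted for (@97)
        anon=$(get_meminfo AnonHugePages)
    fi
    echo "anon=$anon"    # this run: anon=0, as the later @97 record shows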
[xtrace condensed: setup/common.sh@31-32 compared each of the 40 keys of that snapshot (MemTotal through HardwareCorrupted, in file order) against AnonHugePages, continuing past every miss]
00:04:51.333 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:51.333 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:51.333 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:51.333 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:51.333 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:51.333 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:51.333 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:51.333 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:51.333 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:51.333 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:51.333 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:51.333 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:51.333 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:51.334 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:51.334 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:51.334 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:51.334 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42771228 kB' 'MemAvailable: 46279568 kB' 'Buffers: 2704 kB' 'Cached: 12213468 kB' 'SwapCached: 0 kB' 'Active: 9234720 kB' 'Inactive: 3506552 kB' 'Active(anon): 8840368 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528368 kB' 'Mapped: 164088 kB' 'Shmem: 8315268 kB' 'KReclaimable: 202120 kB' 'Slab: 577956 kB' 'SReclaimable: 202120 kB' 'SUnreclaim: 375836 kB' 'KernelStack: 12832 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9952332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
[xtrace condensed: setup/common.sh@31-32 then stepped through this snapshot key by key toward HugePages_Surp (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached and so on in file order); the captured excerpt breaks off mid-scan at the Bounce comparison]
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42771228 kB' 'MemAvailable: 46279568 kB' 'Buffers: 2704 kB' 'Cached: 12213488 kB' 'SwapCached: 0 kB' 'Active: 9234516 kB' 'Inactive: 3506552 kB' 'Active(anon): 8840164 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528120 kB' 'Mapped: 164088 kB' 'Shmem: 8315288 kB' 'KReclaimable: 202120 kB' 'Slab: 577948 kB' 'SReclaimable: 202120 kB' 'SUnreclaim: 375828 kB' 'KernelStack: 12816 kB' 'PageTables: 7768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9952352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB' 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.335 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.336 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.336 19:52:38 
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
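[for reference, a minimal sketch of the get_meminfo helper this trace keeps stepping through, reconstructed from the xtrace lines above; an illustrative reconstruction, not the verbatim setup/common.sh source]

    shopt -s extglob                          # the +([0-9]) pattern below needs extglob
    get_meminfo() {                           # usage: get_meminfo <field> [node]
        local get=$1 node=$2 var val _ mem
        local mem_f=/proc/meminfo
        # with a node index, read that node's sysfs meminfo instead of /proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # per-node lines carry a "Node N " prefix; strip it
        while IFS=': ' read -r var val _; do  # "HugePages_Rsvd: 0" splits into var=HugePages_Rsvd val=0
            [[ $var == "$get" ]] || continue  # walk every field until the requested one
            echo "$val"                       # hence the "echo 0" / "return 0" entries in the trace
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }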
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:51.337 nr_hugepages=1536
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:51.337 resv_hugepages=0
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:51.337 surplus_hugepages=0
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:51.337 anon_hugepages=0
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:51.337 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:51.338 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42771660 kB' 'MemAvailable: 46280000 kB' 'Buffers: 2704 kB' 'Cached: 12213508 kB' 'SwapCached: 0 kB' 'Active: 9235480 kB' 'Inactive: 3506552 kB' 'Active(anon): 8841128 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529088 kB' 'Mapped: 164524 kB' 'Shmem: 8315308 kB' 'KReclaimable: 202120 kB' 'Slab: 577948 kB' 'SReclaimable: 202120 kB' 'SUnreclaim: 375828 kB' 'KernelStack: 12784 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9953864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
[xtrace condensed: the field-by-field walk repeats over the snapshot above, until var reaches HugePages_Total]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27773736 kB' 'MemUsed: 5056148 kB' 'SwapCached: 0 kB' 'Active: 2710008 kB' 'Inactive: 108416 kB' 'Active(anon): 2599120 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2542008 kB' 'Mapped: 32312 kB' 'AnonPages: 279592 kB' 'Shmem: 2322704 kB' 'KernelStack: 7256 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91452 kB' 'Slab: 313300 kB' 'SReclaimable: 91452 kB' 'SUnreclaim: 221848 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.339 19:52:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.339 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.340 19:52:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:51.340 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 14993892 kB' 'MemUsed: 12717932 kB' 'SwapCached: 0 kB' 'Active: 6528660 kB' 'Inactive: 3398136 kB' 'Active(anon): 6245196 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3398136 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9674244 kB' 'Mapped: 132364 kB' 'AnonPages: 252644 kB' 'Shmem: 5992644 kB' 'KernelStack: 5528 kB' 'PageTables: 3032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 110668 kB' 'Slab: 264648 kB' 'SReclaimable: 110668 kB' 'SUnreclaim: 153980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.341 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.342 19:52:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:51.342 node0=512 expecting 512 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:51.342 node1=1024 expecting 1024 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:51.342 00:04:51.342 real 0m1.380s 00:04:51.342 user 0m0.576s 00:04:51.342 sys 0m0.762s 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:51.342 19:52:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:51.342 ************************************ 00:04:51.342 END TEST custom_alloc 00:04:51.342 ************************************ 00:04:51.342 19:52:38 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:51.342 19:52:38 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:51.342 19:52:38 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:51.342 19:52:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:51.342 ************************************ 00:04:51.342 START TEST no_shrink_alloc 00:04:51.342 ************************************ 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
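The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pairs above are bash xtrace from a meminfo lookup helper in setup/common.sh: set -x escapes every character of the literal right-hand side of the [[ == ]] test, and the loop skips each /proc/meminfo (or per-node sysfs meminfo) key until it reaches the requested one, then echoes its value and returns. A minimal sketch of that helper, reconstructed from the trace above -- a reconstruction, not the verbatim SPDK source, which may differ in detail:

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

    # get_meminfo KEY [NODE] -- print the value of KEY from /proc/meminfo,
    # or from /sys/devices/system/node/nodeN/meminfo when NODE is given and
    # that file exists (reconstruction; not the verbatim SPDK helper).
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <id> "; strip it so the
        # same "Key: value" parsing works for both sources.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the skipped keys traced above
            echo "$val"
            return 0
        done
        return 1
    }

    # Usage as seen in the trace: surplus huge pages on NUMA node 0.
    get_meminfo HugePages_Surp 0

The trace above is consistent with this shape: both per-node calls return HugePages_Surp 0, so nodes_test keeps the 512/1024 split that the final "[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]" comparison verifies before custom_alloc ends.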
00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.342 19:52:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.725 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:52.725 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:52.725 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:52.725 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:52.725 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:52.725 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:52.725 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:52.725 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:52.725 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:52.725 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:52.725 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:52.725 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:52.725 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:52.725 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:52.725 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:52.725 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:52.725 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.725 19:52:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43806252 kB' 'MemAvailable: 47314592 kB' 'Buffers: 2704 kB' 'Cached: 12213592 kB' 'SwapCached: 0 kB' 'Active: 9234540 kB' 'Inactive: 3506552 kB' 'Active(anon): 8840188 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527980 kB' 'Mapped: 164200 kB' 'Shmem: 8315392 kB' 'KReclaimable: 202120 kB' 'Slab: 577820 kB' 'SReclaimable: 202120 kB' 'SUnreclaim: 375700 kB' 'KernelStack: 12800 kB' 'PageTables: 7700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9952748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB' 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.725 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:52.726 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # (read/compare/continue cycle repeats for KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted; none match AnonHugePages)
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43806656 kB' 'MemAvailable: 47314996 kB' 'Buffers: 2704 kB' 'Cached: 12213596 kB' 'SwapCached: 0 kB' 'Active: 9234664 kB' 'Inactive: 3506552 kB' 'Active(anon): 8840312 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528104 kB' 'Mapped: 164184 kB' 'Shmem: 8315396 kB' 'KReclaimable: 202120 kB' 'Slab: 577820 kB' 'SReclaimable: 202120 kB' 'SUnreclaim: 375700 kB' 'KernelStack: 12800 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9952768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
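The setup/common.sh@17-@33 markers above trace a get_meminfo helper: it snapshots a meminfo file into an array, then scans key/value pairs until the requested key matches and prints its value. A minimal sketch of that loop, reconstructed from the trace alone (the function body and the per-node handling are assumptions; the verbatim SPDK source may differ):

    #!/usr/bin/env bash
    shopt -s extglob  # required by the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem line
        mem_f=/proc/meminfo
        # assumption: prefer the per-NUMA-node file when a node was given
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # the compare/continue pairs traced above
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo AnonHugePages  # prints 0 on the machine traced here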
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:52.727 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # (read/compare/continue cycle repeats for every key from MemFree through HugePages_Rsvd; none match HugePages_Surp)
00:04:52.728 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.728 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.728 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:52.728 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:52.728 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:52.728 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:52.728 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:52.729 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:52.729 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.729 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.729 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.729 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.729 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.729 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.729 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.729 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.729 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43806996 kB' 'MemAvailable: 47315336 kB' 'Buffers: 2704 kB' 'Cached: 12213612 kB' 'SwapCached: 0 kB' 'Active: 9234372 kB' 'Inactive: 3506552 kB' 'Active(anon): 8840020 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527792 kB' 'Mapped: 164184 kB' 'Shmem: 8315412 kB' 'KReclaimable: 202120 kB' 'Slab: 577812 kB' 'SReclaimable: 202120 kB' 'SUnreclaim: 375692 kB' 'KernelStack: 12752 kB' 'PageTables: 7536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9952788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
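A note on the \H\u\g\e\P\a\g\e\s\_\S\u\r\p noise in these compares: when xtrace (set -x) prints a [[ ]] test whose right-hand side is quoted, bash escapes every character to mark the operand as a literal string rather than a glob pattern, so the scripts themselves contain ordinary quoted names. A short session reproduces the effect (output from a typical bash 5.x shell):

    $ set -x
    $ [[ HugePages_Surp == "HugePages_Surp" ]] && echo match
    + [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    + echo match
    match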
00:04:52.729 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:52.729 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:52.729 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # (read/compare/continue cycle repeats for every key from MemFree through HugePages_Free; none match HugePages_Rsvd)
00:04:52.730 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:52.730 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.730 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:52.730 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:52.730 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
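The hugepages.sh@107 and @109 checks that follow assert the pool is fully accounted for: the HugePages_Total read back from /proc/meminfo must equal the requested nr_hugepages plus surplus and reserved pages, and in this run all the extras are zero. The same invariant, written standalone against the get_meminfo sketch above (1024 mirrors this run's request):

    nr_hugepages=1024
    total=$(get_meminfo HugePages_Total)  # 1024 in the snapshots above
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0

    (( total == nr_hugepages + surp + resv )) || echo "hugepage pool inconsistent" >&2
    (( total == nr_hugepages )) || echo "unexpected surplus/reserved pages" >&2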
00:04:52.730 nr_hugepages=1024
00:04:52.730 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:52.730 resv_hugepages=0
00:04:52.730 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:52.730 surplus_hugepages=0
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:52.731 anon_hugepages=0
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43807300 kB' 'MemAvailable: 47315640 kB' 'Buffers: 2704 kB' 'Cached: 12213636 kB' 'SwapCached: 0 kB' 'Active: 9234472 kB' 'Inactive: 3506552 kB' 'Active(anon): 8840120 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527836 kB' 'Mapped: 164108 kB' 'Shmem: 8315436 kB' 'KReclaimable: 202120 kB' 'Slab: 577792 kB' 'SReclaimable: 202120 kB' 'SUnreclaim: 375672 kB' 'KernelStack: 12752 kB' 'PageTables: 7536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9952812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:52.731 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # (read/compare/continue cycle repeats for every key from MemFree through SReclaimable; none match HugePages_Total)
setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:52.732 
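
The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo key by key until it hits HugePages_Total and echoes 1024, which setup/hugepages.sh@110 then checks against nr_hugepages + surp + resv; the backslash-escaped \H\u\g\e... strings are simply how bash xtrace prints the unquoted pattern side of a [[ == ]] test. A minimal sketch of that parser, reconstructed from the trace (function and variable names are taken from the trace itself; details of the real script may differ):

    shopt -s extglob

    # Sketch of setup/common.sh's get_meminfo as seen in the trace:
    # scan a meminfo file for one key and print its value.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo
        local -a mem
        # Prefer the per-node file when a node index is given (@23-@24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node lines are prefixed "Node <n> "; strip it (@29).
        mem=("${mem[@]#Node +([0-9]) }")
        # Parse "Key: value [kB]" lines until the requested key matches (@31-@33).
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Run against the dump above, get_meminfo HugePages_Total prints the 1024 echoed at common.sh@33.
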
19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.732 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26727888 kB' 'MemUsed: 6101996 kB' 'SwapCached: 0 kB' 'Active: 2705888 kB' 'Inactive: 108416 kB' 'Active(anon): 2595000 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2542012 kB' 'Mapped: 31884 kB' 'AnonPages: 275412 kB' 'Shmem: 2322708 kB' 'KernelStack: 7224 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91452 kB' 'Slab: 313244 kB' 'SReclaimable: 91452 kB' 'SUnreclaim: 221792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 
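
The node0 dump printed at common.sh@16 above already has its per-node prefix stripped; raw /sys/devices/system/node/node0/meminfo lines differ from /proc/meminfo exactly by that prefix, which is what the extglob expansion at common.sh@29 removes. A small illustration (the sample line is hypothetical, shaped like the dump above):

    shopt -s extglob
    # Raw per-node meminfo lines read "Node 0 HugePages_Surp: 0";
    # stripping "Node <n> " lets one IFS=': ' parser handle both files.
    line='Node 0 HugePages_Surp:     0'
    echo "${line#Node +([0-9]) }"    # -> "HugePages_Surp:     0"
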
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.733 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.734 19:52:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:52.734 node0=1024 expecting 1024 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.734 19:52:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:54.116 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.116 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:54.116 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.116 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.116 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.116 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.116 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:54.116 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:54.116 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.116 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.116 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.116 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.116 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.116 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.116 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:54.116 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:54.116 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.116 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:54.116 19:52:41 
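
This closes the first verification pass: node0's surplus (0) is folded into nodes_test, hugepages.sh@128 prints "node0=1024 expecting 1024", @130 confirms the match, and setup.sh is re-run with NRHUGE=512 and CLEAR_HUGE=no; the INFO line that follows shows setup.sh leaving the existing 1024 pages in place rather than shrinking to 512, which is what the no_shrink_alloc name refers to. A rough sketch of the @115-@130 per-node bookkeeping, with loop structure inferred from the trace and seed values mirroring this run:

    # Fold reserved and surplus pages into each node's expected count,
    # then compare with what the kernel reports (nodes_sys from get_nodes).
    resv=0                       # no reserved pages in this run (@110)
    nodes_test=(1024 0)          # expected per node (assumed seed)
    nodes_sys=(1024 0)           # reported per node, from get_nodes
    declare -a sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do                    # @115
        (( nodes_test[node] += resv ))                     # @116
        surp=$(get_meminfo HugePages_Surp "$node")         # @117, sketch above
        (( nodes_test[node] += surp ))
    done
    for node in "${!nodes_test[@]}"; do                    # @126
        sorted_t[nodes_test[node]]=1                       # @127
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"  # @128
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]  # @130
    done
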
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43827348 kB' 'MemAvailable: 47335688 kB' 'Buffers: 2704 kB' 'Cached: 12213700 kB' 'SwapCached: 0 kB' 'Active: 9234920 kB' 'Inactive: 3506552 kB' 'Active(anon): 8840568 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528228 kB' 'Mapped: 164128 kB' 'Shmem: 8315500 kB' 'KReclaimable: 202120 kB' 'Slab: 577760 kB' 'SReclaimable: 202120 kB' 'SUnreclaim: 375640 kB' 'KernelStack: 12784 kB' 'PageTables: 7644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9952696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.116 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.117 19:52:41 
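
The pass that starts after the INFO line is verify_nr_hugepages (invoked at hugepages.sh@204): @89-@94 declare its locals, the @96 test passes because the kernel reports "always [madvise] never" rather than selecting "[never]", AnonHugePages comes back 0 so anon=0 at @97, and @99 moves on to the system-wide HugePages_Surp lookup traced next. A compact sketch of that prologue (the transparent_hugepage sysfs path is an inference from the "always [madvise] never" string at @96, not something the trace names):

    # Sketch of verify_nr_hugepages' prologue (hugepages.sh@89-@99):
    # anonymous huge pages only count when THP is not pinned to "never".
    verify_nr_hugepages() {
        local node sorted_t sorted_s surp resv anon=0        # @89-@94
        # @96: "always [madvise] never" != *"[never]"*, so THP may be in use.
        if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
            anon=$(get_meminfo AnonHugePages)                # @97: 0 here
        fi
        surp=$(get_meminfo HugePages_Surp)                   # @99: system-wide
        # ...the remaining checks repeat the per-node pattern sketched earlier
    }
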
00:04:54.117 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43825844 kB' 'MemAvailable: 47334184 kB' 'Buffers: 2704 kB' 'Cached: 12213704 kB' 'SwapCached: 0 kB' 'Active: 9234748 kB' 'Inactive: 3506552 kB' 'Active(anon): 8840396 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528032 kB' 'Mapped: 164112 kB' 'Shmem: 8315504 kB' 'KReclaimable: 202120 kB' 'Slab: 577776 kB' 'SReclaimable: 202120 kB' 'SUnreclaim: 375656 kB' 'KernelStack: 12800 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9952716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
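The snapshot just printed is internally consistent on the hugepage side: HugePages_Total is 1024 pages at a Hugepagesize of 2048 kB, and 1024 * 2048 kB = 2097152 kB is exactly the Hugetlb figure; HugePages_Free is also 1024, so the pool is fully populated and idle. That identity can be spot-checked with the sketch above (illustrative only, not part of the test):

# Verify HugePages_Total * Hugepagesize == Hugetlb, all figures in kB.
n=$(get_meminfo HugePages_Total)     # 1024
sz=$(get_meminfo Hugepagesize)       # 2048
h=$(get_meminfo Hugetlb)             # 2097152
(( n * sz == h )) && echo "hugetlb accounting OK"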
00:04:54.118 19:52:41 setup.sh.hugepages.no_shrink_alloc -- [trace condensed: setup/common.sh@31-32 walks every /proc/meminfo field (MemTotal through HugePages_Free); none matches HugePages_Surp, so each one hits `continue`]
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43826748 kB' 'MemAvailable: 47335088 kB' 'Buffers: 2704 kB' 'Cached: 12213704 kB' 'SwapCached: 0 kB' 'Active: 9234472 kB'
'Inactive: 3506552 kB' 'Active(anon): 8840120 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527756 kB' 'Mapped: 164112 kB' 'Shmem: 8315504 kB' 'KReclaimable: 202120 kB' 'Slab: 577776 kB' 'SReclaimable: 202120 kB' 'SUnreclaim: 375656 kB' 'KernelStack: 12800 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9952736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB' 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.119 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.119 19:52:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.120 19:52:41 setup.sh.hugepages.no_shrink_alloc -- [trace condensed: setup/common.sh@31-32 checks the remaining fields (Active through HugePages_Free) against HugePages_Rsvd; all of them hit `continue`]
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:54.121 nr_hugepages=1024
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:54.121 resv_hugepages=0
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:54.121 surplus_hugepages=0
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:54.121 anon_hugepages=0
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
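Those two arithmetic evaluations are the heart of the no_shrink_alloc check: having just measured anon, surp and resv as 0, hugepages.sh@107 asserts that the kernel's pool equals the requested page count plus surplus and reserved pages, and @109 that the pool still holds exactly the 1024 pages the test configured. The same invariant, spelled out standalone (uses the get_meminfo sketch above; variable names assumed):

# Invariant behind hugepages.sh@107-109, as a standalone sketch.
nr_hugepages=1024                         # pages the test configured
total=$(get_meminfo HugePages_Total)
surp=$(get_meminfo HugePages_Surp)
resv=$(get_meminfo HugePages_Rsvd)
# The pool must account for exactly the configured pages ...
(( total == nr_hugepages + surp + resv )) || echo "unexpected surplus/reserved pages" >&2
# ... and must not have been shrunk or grown behind the test's back.
(( total == nr_hugepages )) || echo "pool drifted from $nr_hugepages pages" >&2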
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43826984 kB' 'MemAvailable: 47335324 kB' 'Buffers: 2704 kB' 'Cached: 12213704 kB' 'SwapCached: 0 kB' 'Active: 9235172 kB' 'Inactive: 3506552 kB' 'Active(anon): 8840820 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528456 kB' 'Mapped: 164112 kB' 'Shmem: 8315504 kB' 'KReclaimable: 202120 kB' 'Slab: 577776 kB' 'SReclaimable: 202120 kB' 'SUnreclaim: 375656 kB' 'KernelStack: 12832 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9952760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 36480 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1902172 kB' 'DirectMap2M: 15843328 kB' 'DirectMap1G: 51380224 kB'
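Comparing this snapshot with the two before it, only the transient fields move (MemFree by well under 1 MB, plus AnonPages, KernelStack, PageTables and Committed_AS); every HugePages_* counter is unchanged, which is exactly what a no-shrink scenario should look like. The four counters the helper keeps re-deriving can also be pulled in one invocation (illustrative command; output matches the snapshots above, column widths approximate):

$ grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
HugePages_Total:    1024
HugePages_Free:     1024
HugePages_Rsvd:        0
HugePages_Surp:        0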
00:04:54.121 19:52:41 setup.sh.hugepages.no_shrink_alloc -- [trace condensed: setup/common.sh@31-32 again walks the /proc/meminfo fields (MemTotal through Percpu in this span) against HugePages_Total; every one hits `continue`]
00:04:54.122 19:52:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.122 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.122 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.122 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.122 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26733120 kB' 'MemUsed: 6096764 kB' 'SwapCached: 0 kB' 'Active: 2706304 kB' 'Inactive: 108416 kB' 'Active(anon): 2595416 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2542016 kB' 
'Mapped: 31880 kB' 'AnonPages: 275664 kB' 'Shmem: 2322712 kB' 'KernelStack: 7240 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91452 kB' 'Slab: 313252 kB' 'SReclaimable: 91452 kB' 'SUnreclaim: 221800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:54.123 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... the same @31 read -r var val _ / @32 [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / @32 continue cycle repeats for the remaining node0 meminfo fields (Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped), none matching HugePages_Surp ...] 00:04:54.124 19:52:41
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:54.124 node0=1024 expecting 1024 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:54.124 00:04:54.124 real 0m2.731s 00:04:54.124 user 0m1.129s 00:04:54.124 sys 0m1.523s 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:54.124 19:52:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:54.124 ************************************ 00:04:54.124 END TEST no_shrink_alloc 00:04:54.124 ************************************ 00:04:54.124 19:52:41 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:54.124 19:52:41 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:54.124 19:52:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:54.124 19:52:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.124 19:52:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:54.124 19:52:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.124 19:52:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:54.124 19:52:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:54.124 19:52:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.124 19:52:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:54.124 19:52:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:54.124 19:52:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:54.124 19:52:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:54.124 19:52:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:54.124 00:04:54.124 real 0m11.088s 00:04:54.124 user 0m4.277s 00:04:54.125 sys 0m5.772s 00:04:54.125 19:52:41 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:54.125 19:52:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:54.125 ************************************ 00:04:54.125 END TEST hugepages 00:04:54.125 ************************************ 00:04:54.125 19:52:41 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:54.125 19:52:41 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:54.125 19:52:41 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:54.125 19:52:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:54.125 ************************************ 00:04:54.125 START TEST driver 00:04:54.125 ************************************ 00:04:54.125 19:52:41 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:54.383 * Looking for test storage... 
00:04:54.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:54.383 19:52:41 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:54.383 19:52:41 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:54.383 19:52:41 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:56.932 19:52:44 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:56.932 19:52:44 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.932 19:52:44 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.932 19:52:44 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:56.932 ************************************ 00:04:56.932 START TEST guess_driver 00:04:56.932 ************************************ 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:56.932 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:56.932 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:56.932 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:56.932 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:56.932 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:56.932 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:56.932 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:56.932 19:52:44 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:56.932 Looking for driver=vfio-pci 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.932 19:52:44 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:57.867 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.867 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.867 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.867 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.867 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.867 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.867 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.867 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.867 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.867 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.868 19:52:45 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:57.868 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.125 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.125 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.125 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.125 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.125 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.125 19:52:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:59.060 19:52:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:59.060 19:52:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:59.060 19:52:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:59.060 19:52:46 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:59.060 19:52:46 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:59.060 19:52:46 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.060 19:52:46 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:01.639 00:05:01.639 real 0m4.612s 00:05:01.639 user 0m1.020s 00:05:01.639 sys 0m1.716s 00:05:01.639 19:52:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.639 19:52:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:01.639 ************************************ 00:05:01.639 END TEST guess_driver 00:05:01.639 ************************************ 00:05:01.639 00:05:01.639 real 0m7.191s 00:05:01.639 user 0m1.602s 00:05:01.639 sys 0m2.730s 00:05:01.639 19:52:48 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:01.639 
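[editor's note] The guess_driver trace above picks vfio-pci when an IOMMU is active (141 groups in this run), falls back to the unsafe no-IOMMU switch, and accepts the driver only if modprobe can resolve it to loadable modules. A sketch of that decision using the names visible in the @21-@30 trace; a reconstruction, not setup/driver.sh itself:

#!/usr/bin/env bash
# Decide whether vfio-pci is usable, mirroring the @24-@30 checks in the trace.
shopt -s nullglob

guess_driver() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == [Yy]* ]]; then
        # same idea as is_driver: the dependency chain must name real .ko files
        if [[ $(modprobe --show-depends vfio_pci 2> /dev/null) == *.ko* ]]; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
    return 1
}

driver=$(guess_driver) && echo "Looking for driver=$driver"
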
19:52:48 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:01.639 ************************************ 00:05:01.639 END TEST driver 00:05:01.639 ************************************ 00:05:01.639 19:52:48 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:01.639 19:52:48 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:01.639 19:52:48 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:01.639 19:52:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:01.639 ************************************ 00:05:01.639 START TEST devices 00:05:01.639 ************************************ 00:05:01.639 19:52:48 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:01.639 * Looking for test storage... 00:05:01.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:01.639 19:52:49 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:01.639 19:52:49 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:01.639 19:52:49 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:01.639 19:52:49 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:03.017 19:52:50 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:03.017 19:52:50 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:03.017 19:52:50 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:03.017 19:52:50 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:03.017 19:52:50 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:03.017 19:52:50 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:03.017 19:52:50 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:03.017 19:52:50 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:03.017 19:52:50 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:03.017 19:52:50 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:03.017 No valid GPT data, 
bailing 00:05:03.017 19:52:50 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:03.017 19:52:50 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:03.017 19:52:50 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:03.017 19:52:50 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:03.017 19:52:50 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:03.017 19:52:50 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:03.017 19:52:50 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:03.018 19:52:50 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:03.018 19:52:50 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:03.018 19:52:50 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:03.018 19:52:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:03.018 ************************************ 00:05:03.018 START TEST nvme_mount 00:05:03.018 ************************************ 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:03.018 19:52:50 
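[editor's note] Before nvme_mount starts, the devices trace above screens each namespace: spdk-gpt.py and blkid must find no existing partition table ("No valid GPT data, bailing" followed by an empty PTTYPE is the usable case), and the disk has to reach min_disk_size. A sketch of an equivalent screen built only on blkid and sysfs; the helper names mirror the trace but this is not the SPDK code:

#!/usr/bin/env bash
# Accept only empty, large-enough NVMe namespaces, as the trace does.
shopt -s nullglob
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the log

block_in_use() {
    # blkid prints the partition-table type (gpt, dos, ...) when one exists
    [[ -n $(blkid -s PTTYPE -o value "/dev/$1" 2> /dev/null) ]]
}

sec_size_to_bytes() {
    # /sys/block/<dev>/size counts 512-byte sectors
    echo $(( $(< "/sys/block/$1/size") * 512 ))
}

for block in /sys/block/nvme*n*; do
    dev=${block##*/}
    block_in_use "$dev" && continue
    (( $(sec_size_to_bytes "$dev") >= min_disk_size )) && echo "candidate: $dev"
done

In the run above the disk reports 1000204886016 bytes, comfortably over the threshold, so nvme0n1 is selected.
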
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:03.018 19:52:50 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:03.964 Creating new GPT entries in memory. 00:05:03.964 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:03.964 other utilities. 00:05:03.964 19:52:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:03.964 19:52:51 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.964 19:52:51 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:03.964 19:52:51 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:03.964 19:52:51 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:05.339 Creating new GPT entries in memory. 00:05:05.339 The operation has completed successfully. 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3052531 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
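[editor's note] The nvme_mount trace runs a full partition lifecycle: zap the disk, carve one 1 GiB partition, format it ext4, mount it and drop the test_nvme marker file, then (as cleanup_nvme later shows) unmount and wipe every signature. A condensed sketch of the same round trip; the mount point is a placeholder and this must only ever run against a disposable disk:

#!/usr/bin/env bash
# One nvme_mount-style round trip: partition, format, mount, then scrub.
set -e
disk=/dev/nvme0n1
mnt=/tmp/nvme_mount       # the log mounts under .../spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all               # destroy existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:2099199    # sectors 2048-2099199 = one 1 GiB partition
part=${disk}p1

mkdir -p "$mnt"
mkfs.ext4 -qF "$part"                  # -q quiet, -F skip the sanity prompt
mount "$part" "$mnt"
touch "$mnt/test_nvme"                 # the dummy file the verify step looks for

# cleanup_nvme equivalent
mountpoint -q "$mnt" && umount "$mnt"
wipefs --all "$part"                   # erases the ext4 magic (the '53 ef' in the log)
wipefs --all "$disk"                   # erases GPT headers and the protective MBR

The real script additionally serializes the sgdisk calls and waits for partition uevents via sync_dev_uevents.sh, which is what the flock and wait lines in the trace correspond to.
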
00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.339 19:52:52 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.273 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.532 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.532 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:06.532 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.532 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:06.532 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:06.532 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:06.532 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.532 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.532 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:06.532 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:06.532 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:06.532 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:06.532 19:52:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:06.791 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:06.791 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:06.791 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:06.791 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:06.791 19:52:54 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.791 19:52:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:07.725 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.725 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:07.725 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:07.725 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.725 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.725 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.725 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.726 19:52:55 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.726 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:07.984 19:52:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.360 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.361 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.361 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:09.361 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:09.361 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:09.361 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:09.361 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:09.361 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:09.361 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.361 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:09.361 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:09.361 19:52:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:09.361 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:09.361 00:05:09.361 real 0m6.233s 00:05:09.361 user 0m1.410s 00:05:09.361 sys 0m2.404s 00:05:09.361 19:52:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.361 19:52:56 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:09.361 ************************************ 00:05:09.361 END TEST nvme_mount 00:05:09.361 ************************************ 
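The long run of [[ ... == \0\0\0\0\:\8\8... ]] comparisons above is the verify helper walking setup.sh output line by line and checking that the allow-listed controller was left bound because it is in use. A rough bash equivalent of that loop, with the BDF hard-coded from this run and the status pattern simplified (the real helper lives in the repo's setup test scripts):

# Sketch of the verify() PCI scan: read each "BDF ... status" line that
# setup.sh emits and flag the allow-listed device as found when its
# status shows it was skipped because it hosts active devices.
allowed=0000:88:00.0
found=0
while read -r pci _ _ status; do
    [[ $pci == "$allowed" && $status == *"Active devices:"* ]] && found=1
done < <(PCI_ALLOWED=$allowed ./scripts/setup.sh config)
(( found == 1 ))   # a non-zero exit here fails the test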
00:05:09.361 19:52:56 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:09.361 19:52:56 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.361 19:52:56 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.361 19:52:56 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:09.361 ************************************ 00:05:09.361 START TEST dm_mount 00:05:09.361 ************************************ 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:09.361 19:52:56 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:10.298 Creating new GPT entries in memory. 00:05:10.298 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:10.298 other utilities. 00:05:10.298 19:52:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:10.298 19:52:57 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:10.298 19:52:57 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:10.298 19:52:57 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:10.298 19:52:57 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:11.678 Creating new GPT entries in memory. 00:05:11.678 The operation has completed successfully. 
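The partition loop under way above (and continuing below for the second partition) computes each partition's sector range and creates it under an advisory lock so a concurrent udev probe cannot race the table update. In sketch form, using the sizes from this trace (1073741824 bytes, i.e. 2097152 512-byte sectors per partition):

# Sketch of the partition loop: start at sector 2048 and lay out
# part_no contiguous equal-sized partitions, one flock'd sgdisk each.
disk=/dev/nvme0n1
part_no=2
size=$(( 1073741824 / 512 ))   # sectors per partition
part_start=0 part_end=0
for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
done
# yields --new=1:2048:2099199 and --new=2:2099200:4196351, matching the trace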
00:05:11.678 19:52:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:11.678 19:52:58 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.678 19:52:58 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:11.678 19:52:58 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:11.678 19:52:58 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:12.616 The operation has completed successfully. 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3054912 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:12.616 19:52:59 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:12.616 19:53:00 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:12.616 19:53:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:12.616 19:53:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:12.616 19:53:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:12.616 19:53:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:12.616 19:53:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:12.616 19:53:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:12.616 19:53:00 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:12.616 19:53:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:12.616 19:53:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.616 19:53:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:12.616 19:53:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:12.616 19:53:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.616 19:53:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:13.551 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.551 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:13.551 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:13.551 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.551 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.551 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.551 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.551 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.551 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.551 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:13.552 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.810 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:13.810 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:13.810 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:13.810 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:13.810 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:13.810 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:13.810 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:13.810 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:13.810 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:13.810 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:13.810 19:53:01 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:13.810 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:13.810 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:13.811 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:13.811 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.811 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:13.811 19:53:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:13.811 19:53:01 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.811 19:53:01 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.745 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.746 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.746 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.746 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.746 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.746 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.746 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.746 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.004 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.004 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:15.004 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:15.004 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:15.004 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:15.004 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:15.004 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:15.004 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.004 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:15.004 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.004 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:15.004 19:53:02 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:15.004 00:05:15.004 real 0m5.772s 00:05:15.004 user 0m0.996s 00:05:15.004 sys 0m1.647s 00:05:15.004 19:53:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.004 19:53:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:15.004 ************************************ 00:05:15.004 END TEST dm_mount 00:05:15.004 ************************************ 00:05:15.262 19:53:02 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:15.262 19:53:02 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:15.262 19:53:02 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.262 19:53:02 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.262 19:53:02 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:15.262 19:53:02 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.262 19:53:02 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:15.541 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:15.541 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:15.541 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:15.541 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:15.541 19:53:02 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:15.541 19:53:02 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:15.541 19:53:02 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:15.541 19:53:02 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.541 19:53:02 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:15.541 19:53:02 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.541 19:53:02 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:15.541 00:05:15.541 real 0m13.960s 00:05:15.541 user 0m3.110s 00:05:15.541 sys 0m5.063s 00:05:15.541 19:53:02 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.541 19:53:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:15.541 ************************************ 00:05:15.541 END TEST devices 00:05:15.541 ************************************ 00:05:15.541 00:05:15.541 real 0m42.803s 00:05:15.541 user 0m12.200s 00:05:15.541 sys 0m18.907s 00:05:15.541 19:53:02 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.541 19:53:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:15.541 ************************************ 00:05:15.541 END TEST setup.sh 00:05:15.541 ************************************ 00:05:15.541 19:53:02 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:16.921 Hugepages 00:05:16.921 node hugesize free / total 00:05:16.921 node0 1048576kB 0 / 0 00:05:16.921 node0 2048kB 2048 / 2048 00:05:16.921 node1 1048576kB 0 / 0 00:05:16.921 node1 2048kB 0 / 0 00:05:16.921 00:05:16.921 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:16.921 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:16.921 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:16.921 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:16.921 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:16.921 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:16.921 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:16.921 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:16.921 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:16.921 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:16.921 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:16.921 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:16.921 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:16.921 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:16.921 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:16.921 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:16.921 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:16.921 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:16.921 19:53:04 -- spdk/autotest.sh@130 -- # uname -s 00:05:16.921 19:53:04 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:16.921 19:53:04 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:16.921 19:53:04 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:17.856 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:17.856 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:17.856 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:17.856 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:17.856 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:17.856 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:17.856 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:17.856 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:17.856 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:17.856 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:17.856 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:18.114 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:18.114 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:18.114 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:18.114 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:18.114 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:19.050 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:19.050 19:53:06 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:19.985 19:53:07 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:19.985 19:53:07 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:19.985 19:53:07 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:19.985 19:53:07 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:19.985 19:53:07 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:19.985 19:53:07 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:19.985 19:53:07 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:19.985 19:53:07 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:19.985 19:53:07 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:20.244 19:53:07 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:20.244 19:53:07 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:20.244 19:53:07 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:21.179 Waiting for block devices as requested 00:05:21.179 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:21.439 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:21.439 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:21.699 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:21.699 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:21.699 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:21.699 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:21.959 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:21.959 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:21.959 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:21.959 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:22.217 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:22.217 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:22.217 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:22.217 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:22.476 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:22.476 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:22.476 19:53:10 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
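The bdfs array iterated in the loop that starts here comes from the get_nvme_bdfs helper traced a few entries back. Condensed into a sketch, assuming it runs from the SPDK repo root:

# Sketch of get_nvme_bdfs(): gen_nvme.sh prints a bdev_nvme JSON config
# for every NVMe controller it finds; jq extracts each PCI address.
get_nvme_bdfs() {
    local bdfs
    bdfs=($(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} != 0 )) || return 1   # fail fast with no controller present
    printf '%s\n' "${bdfs[@]}"           # one BDF per line, e.g. 0000:88:00.0
}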
00:05:22.476 19:53:10 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:22.476 19:53:10 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:22.476 19:53:10 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:05:22.476 19:53:10 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:22.476 19:53:10 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:22.476 19:53:10 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:22.476 19:53:10 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:22.476 19:53:10 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:22.476 19:53:10 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:22.476 19:53:10 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:22.476 19:53:10 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:22.476 19:53:10 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:22.476 19:53:10 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:05:22.476 19:53:10 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:22.476 19:53:10 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:22.476 19:53:10 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:22.476 19:53:10 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:22.476 19:53:10 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:22.476 19:53:10 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:22.476 19:53:10 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:22.476 19:53:10 -- common/autotest_common.sh@1553 -- # continue 00:05:22.476 19:53:10 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:22.476 19:53:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:22.476 19:53:10 -- common/autotest_common.sh@10 -- # set +x 00:05:22.734 19:53:10 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:22.734 19:53:10 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:22.734 19:53:10 -- common/autotest_common.sh@10 -- # set +x 00:05:22.734 19:53:10 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:23.670 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:23.670 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:23.670 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:23.670 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:23.670 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:23.670 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:23.670 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:23.670 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:23.670 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:23.670 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:23.670 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:23.670 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:23.928 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:23.928 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:23.928 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:23.928 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:24.864 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:24.864 19:53:12 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:24.864 19:53:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.864 19:53:12 -- 
common/autotest_common.sh@10 -- # set +x 00:05:24.864 19:53:12 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:24.864 19:53:12 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:24.864 19:53:12 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:24.864 19:53:12 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:24.864 19:53:12 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:24.864 19:53:12 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:24.864 19:53:12 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:24.864 19:53:12 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:24.864 19:53:12 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.864 19:53:12 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:24.864 19:53:12 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:25.122 19:53:12 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:25.122 19:53:12 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:05:25.122 19:53:12 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:25.122 19:53:12 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:25.122 19:53:12 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:05:25.122 19:53:12 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:25.122 19:53:12 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:05:25.122 19:53:12 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:05:25.122 19:53:12 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:05:25.122 19:53:12 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3060086 00:05:25.122 19:53:12 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.123 19:53:12 -- common/autotest_common.sh@1594 -- # waitforlisten 3060086 00:05:25.123 19:53:12 -- common/autotest_common.sh@827 -- # '[' -z 3060086 ']' 00:05:25.123 19:53:12 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.123 19:53:12 -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:25.123 19:53:12 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.123 19:53:12 -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:25.123 19:53:12 -- common/autotest_common.sh@10 -- # set +x 00:05:25.123 [2024-07-13 19:53:12.588280] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
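Before the opal revert, the cleanup path narrows that list to a single controller model by PCI device ID: each BDF's ID is read from sysfs and compared against 0x0a54, the 8086 0a54 part shown in the status table earlier. A minimal sketch of get_nvme_bdfs_by_id as traced above:

# Keep only BDFs whose PCI device ID matches the requested one.
get_nvme_bdfs_by_id() {
    local want=$1 bdf device
    for bdf in $(get_nvme_bdfs); do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == "$want" ]] && printf '%s\n' "$bdf"
    done
}
mapfile -t bdfs < <(get_nvme_bdfs_by_id 0x0a54)   # 0x0a54 per this run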
00:05:25.123 [2024-07-13 19:53:12.588386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3060086 ] 00:05:25.123 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.123 [2024-07-13 19:53:12.651576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.123 [2024-07-13 19:53:12.741438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.379 19:53:13 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:25.379 19:53:13 -- common/autotest_common.sh@860 -- # return 0 00:05:25.379 19:53:13 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:05:25.379 19:53:13 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:05:25.379 19:53:13 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:28.659 nvme0n1 00:05:28.659 19:53:16 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:28.659 [2024-07-13 19:53:16.302063] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:28.659 [2024-07-13 19:53:16.302111] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:28.659 request: 00:05:28.659 { 00:05:28.659 "nvme_ctrlr_name": "nvme0", 00:05:28.659 "password": "test", 00:05:28.659 "method": "bdev_nvme_opal_revert", 00:05:28.659 "req_id": 1 00:05:28.659 } 00:05:28.659 Got JSON-RPC error response 00:05:28.659 response: 00:05:28.659 { 00:05:28.660 "code": -32603, 00:05:28.660 "message": "Internal error" 00:05:28.660 } 00:05:28.918 19:53:16 -- common/autotest_common.sh@1600 -- # true 00:05:28.919 19:53:16 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:05:28.919 19:53:16 -- common/autotest_common.sh@1604 -- # killprocess 3060086 00:05:28.919 19:53:16 -- common/autotest_common.sh@946 -- # '[' -z 3060086 ']' 00:05:28.919 19:53:16 -- common/autotest_common.sh@950 -- # kill -0 3060086 00:05:28.919 19:53:16 -- common/autotest_common.sh@951 -- # uname 00:05:28.919 19:53:16 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:28.919 19:53:16 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3060086 00:05:28.919 19:53:16 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:28.919 19:53:16 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:28.919 19:53:16 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3060086' 00:05:28.919 killing process with pid 3060086 00:05:28.919 19:53:16 -- common/autotest_common.sh@965 -- # kill 3060086 00:05:28.919 19:53:16 -- common/autotest_common.sh@970 -- # wait 3060086 00:05:28.919 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:28.919 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:28.919 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:28.919 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:28.919 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:28.919 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:28.919 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:28.919 EAL: Unexpected size 0 of DMA remapping cleared 
00:05:30.821 19:53:18 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:30.821 19:53:18 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:30.821 19:53:18 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:30.821 19:53:18 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:30.821 19:53:18 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:30.821 19:53:18 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:30.821 19:53:18 -- common/autotest_common.sh@10 -- # set +x 00:05:30.821 19:53:18 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:30.821 19:53:18 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:30.821 19:53:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.821 19:53:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.821 19:53:18 -- common/autotest_common.sh@10 -- # set +x 00:05:30.821 ************************************ 00:05:30.821 START TEST env 00:05:30.821 ************************************ 00:05:30.821 19:53:18 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:30.821 * Looking for test storage...
00:05:30.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:30.821 19:53:18 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:30.821 19:53:18 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.821 19:53:18 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.821 19:53:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.821 ************************************ 00:05:30.821 START TEST env_memory 00:05:30.821 ************************************ 00:05:30.821 19:53:18 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:30.821 00:05:30.821 00:05:30.821 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.821 http://cunit.sourceforge.net/ 00:05:30.821 00:05:30.821 00:05:30.821 Suite: memory 00:05:30.821 Test: alloc and free memory map ...[2024-07-13 19:53:18.239158] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:30.821 passed 00:05:30.821 Test: mem map translation ...[2024-07-13 19:53:18.258927] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:30.821 [2024-07-13 19:53:18.258948] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:30.821 [2024-07-13 19:53:18.258998] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:30.821 [2024-07-13 19:53:18.259009] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:30.821 passed 00:05:30.821 Test: mem map registration ...[2024-07-13 19:53:18.299573] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:30.821 [2024-07-13 19:53:18.299592] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:30.821 passed 00:05:30.821 Test: mem map adjacent registrations ...passed 00:05:30.821 00:05:30.821 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.821 suites 1 1 n/a 0 0 00:05:30.821 tests 4 4 4 0 0 00:05:30.821 asserts 152 152 152 0 n/a 00:05:30.821 00:05:30.821 Elapsed time = 0.140 seconds 00:05:30.821 00:05:30.821 real 0m0.148s 00:05:30.821 user 0m0.141s 00:05:30.821 sys 0m0.006s 00:05:30.821 19:53:18 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.821 19:53:18 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:30.821 ************************************ 00:05:30.821 END TEST env_memory 00:05:30.821 ************************************ 00:05:30.821 19:53:18 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:30.821 19:53:18 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:30.821 19:53:18 env -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:05:30.821 19:53:18 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.821 ************************************ 00:05:30.821 START TEST env_vtophys 00:05:30.821 ************************************ 00:05:30.821 19:53:18 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:30.821 EAL: lib.eal log level changed from notice to debug 00:05:30.821 EAL: Detected lcore 0 as core 0 on socket 0 00:05:30.821 EAL: Detected lcore 1 as core 1 on socket 0 00:05:30.821 EAL: Detected lcore 2 as core 2 on socket 0 00:05:30.821 EAL: Detected lcore 3 as core 3 on socket 0 00:05:30.821 EAL: Detected lcore 4 as core 4 on socket 0 00:05:30.821 EAL: Detected lcore 5 as core 5 on socket 0 00:05:30.821 EAL: Detected lcore 6 as core 8 on socket 0 00:05:30.821 EAL: Detected lcore 7 as core 9 on socket 0 00:05:30.821 EAL: Detected lcore 8 as core 10 on socket 0 00:05:30.821 EAL: Detected lcore 9 as core 11 on socket 0 00:05:30.821 EAL: Detected lcore 10 as core 12 on socket 0 00:05:30.821 EAL: Detected lcore 11 as core 13 on socket 0 00:05:30.821 EAL: Detected lcore 12 as core 0 on socket 1 00:05:30.821 EAL: Detected lcore 13 as core 1 on socket 1 00:05:30.821 EAL: Detected lcore 14 as core 2 on socket 1 00:05:30.821 EAL: Detected lcore 15 as core 3 on socket 1 00:05:30.821 EAL: Detected lcore 16 as core 4 on socket 1 00:05:30.821 EAL: Detected lcore 17 as core 5 on socket 1 00:05:30.821 EAL: Detected lcore 18 as core 8 on socket 1 00:05:30.821 EAL: Detected lcore 19 as core 9 on socket 1 00:05:30.821 EAL: Detected lcore 20 as core 10 on socket 1 00:05:30.821 EAL: Detected lcore 21 as core 11 on socket 1 00:05:30.821 EAL: Detected lcore 22 as core 12 on socket 1 00:05:30.821 EAL: Detected lcore 23 as core 13 on socket 1 00:05:30.821 EAL: Detected lcore 24 as core 0 on socket 0 00:05:30.821 EAL: Detected lcore 25 as core 1 on socket 0 00:05:30.821 EAL: Detected lcore 26 as core 2 on socket 0 00:05:30.821 EAL: Detected lcore 27 as core 3 on socket 0 00:05:30.821 EAL: Detected lcore 28 as core 4 on socket 0 00:05:30.821 EAL: Detected lcore 29 as core 5 on socket 0 00:05:30.821 EAL: Detected lcore 30 as core 8 on socket 0 00:05:30.821 EAL: Detected lcore 31 as core 9 on socket 0 00:05:30.821 EAL: Detected lcore 32 as core 10 on socket 0 00:05:30.821 EAL: Detected lcore 33 as core 11 on socket 0 00:05:30.821 EAL: Detected lcore 34 as core 12 on socket 0 00:05:30.821 EAL: Detected lcore 35 as core 13 on socket 0 00:05:30.821 EAL: Detected lcore 36 as core 0 on socket 1 00:05:30.821 EAL: Detected lcore 37 as core 1 on socket 1 00:05:30.821 EAL: Detected lcore 38 as core 2 on socket 1 00:05:30.821 EAL: Detected lcore 39 as core 3 on socket 1 00:05:30.821 EAL: Detected lcore 40 as core 4 on socket 1 00:05:30.821 EAL: Detected lcore 41 as core 5 on socket 1 00:05:30.821 EAL: Detected lcore 42 as core 8 on socket 1 00:05:30.821 EAL: Detected lcore 43 as core 9 on socket 1 00:05:30.821 EAL: Detected lcore 44 as core 10 on socket 1 00:05:30.821 EAL: Detected lcore 45 as core 11 on socket 1 00:05:30.821 EAL: Detected lcore 46 as core 12 on socket 1 00:05:30.821 EAL: Detected lcore 47 as core 13 on socket 1 00:05:30.821 EAL: Maximum logical cores by configuration: 128 00:05:30.821 EAL: Detected CPU lcores: 48 00:05:30.821 EAL: Detected NUMA nodes: 2 00:05:30.821 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:30.821 EAL: Detected shared linkage of DPDK 00:05:30.821 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:30.821 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:30.821 EAL: Registered [vdev] bus. 00:05:30.821 EAL: bus.vdev log level changed from disabled to notice 00:05:30.821 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:30.821 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:30.821 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:30.821 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:30.821 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:30.821 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:30.821 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:30.821 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:30.821 EAL: No shared files mode enabled, IPC will be disabled 00:05:30.821 EAL: No shared files mode enabled, IPC is disabled 00:05:30.821 EAL: Bus pci wants IOVA as 'DC' 00:05:30.821 EAL: Bus vdev wants IOVA as 'DC' 00:05:30.821 EAL: Buses did not request a specific IOVA mode. 00:05:30.821 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:30.821 EAL: Selected IOVA mode 'VA' 00:05:30.821 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.822 EAL: Probing VFIO support... 00:05:30.822 EAL: IOMMU type 1 (Type 1) is supported 00:05:30.822 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:30.822 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:30.822 EAL: VFIO support initialized 00:05:30.822 EAL: Ask a virtual area of 0x2e000 bytes 00:05:30.822 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:30.822 EAL: Setting up physically contiguous memory... 
00:05:30.822 EAL: Setting maximum number of open files to 524288 00:05:30.822 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:30.822 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:30.822 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:30.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.822 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:30.822 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.822 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:30.822 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:30.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.822 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:30.822 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.822 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:30.822 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:30.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.822 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:30.822 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.822 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:30.822 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:30.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.822 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:30.822 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:30.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.822 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:30.822 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:30.822 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:30.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.822 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:30.822 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.822 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:30.822 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:30.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.822 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:30.822 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.822 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:30.822 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:30.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.822 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:30.822 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.822 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:30.822 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:30.822 EAL: Ask a virtual area of 0x61000 bytes 00:05:30.822 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:30.822 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:30.822 EAL: Ask a virtual area of 0x400000000 bytes 00:05:30.822 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:30.822 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:30.822 EAL: Hugepages will be freed exactly as allocated. 00:05:30.822 EAL: No shared files mode enabled, IPC is disabled 00:05:30.822 EAL: No shared files mode enabled, IPC is disabled 00:05:30.822 EAL: TSC frequency is ~2700000 KHz 00:05:30.822 EAL: Main lcore 0 is ready (tid=7fae9caa4a00;cpuset=[0]) 00:05:30.822 EAL: Trying to obtain current memory policy. 00:05:30.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.822 EAL: Restoring previous memory policy: 0 00:05:30.822 EAL: request: mp_malloc_sync 00:05:30.822 EAL: No shared files mode enabled, IPC is disabled 00:05:30.822 EAL: Heap on socket 0 was expanded by 2MB 00:05:30.822 EAL: No shared files mode enabled, IPC is disabled 00:05:30.822 EAL: No shared files mode enabled, IPC is disabled 00:05:30.822 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:30.822 EAL: Mem event callback 'spdk:(nil)' registered 00:05:30.822 00:05:30.822 00:05:30.822 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.822 http://cunit.sourceforge.net/ 00:05:30.822 00:05:30.822 00:05:30.822 Suite: components_suite 00:05:30.822 Test: vtophys_malloc_test ...passed 00:05:30.822 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:30.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.822 EAL: Restoring previous memory policy: 4 00:05:30.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.822 EAL: request: mp_malloc_sync 00:05:30.822 EAL: No shared files mode enabled, IPC is disabled 00:05:30.822 EAL: Heap on socket 0 was expanded by 4MB 00:05:30.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.822 EAL: request: mp_malloc_sync 00:05:30.822 EAL: No shared files mode enabled, IPC is disabled 00:05:30.822 EAL: Heap on socket 0 was shrunk by 4MB 00:05:30.822 EAL: Trying to obtain current memory policy. 00:05:30.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.822 EAL: Restoring previous memory policy: 4 00:05:30.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.822 EAL: request: mp_malloc_sync 00:05:30.822 EAL: No shared files mode enabled, IPC is disabled 00:05:30.822 EAL: Heap on socket 0 was expanded by 6MB 00:05:30.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.822 EAL: request: mp_malloc_sync 00:05:30.822 EAL: No shared files mode enabled, IPC is disabled 00:05:30.822 EAL: Heap on socket 0 was shrunk by 6MB 00:05:30.822 EAL: Trying to obtain current memory policy. 00:05:30.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.822 EAL: Restoring previous memory policy: 4 00:05:30.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.822 EAL: request: mp_malloc_sync 00:05:30.822 EAL: No shared files mode enabled, IPC is disabled 00:05:30.822 EAL: Heap on socket 0 was expanded by 10MB 00:05:30.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.822 EAL: request: mp_malloc_sync 00:05:30.822 EAL: No shared files mode enabled, IPC is disabled 00:05:30.822 EAL: Heap on socket 0 was shrunk by 10MB 00:05:30.822 EAL: Trying to obtain current memory policy. 
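The memseg-list geometry EAL reported above while setting up physically contiguous memory is self-consistent: each of the four lists per socket holds n_segs:8192 pages of hugepage_sz:2097152 bytes, which is exactly the 0x400000000-byte virtual area reserved for it. A quick cross-check of that arithmetic (a sketch; all constants are taken from the log above):

# Cross-check of the memseg-list sizes in the EAL output above.
n_segs = 8192               # segments per memseg list
hugepage_sz = 2097152       # 2 MiB hugepages
list_bytes = n_segs * hugepage_sz

assert list_bytes == 0x400000000   # matches every "size = 0x400000000" line
print(list_bytes // 2**30, "GiB per memseg list")  # -> 16 GiB
# 4 lists per socket x 2 sockets -> 128 GiB of VA space reserved up front.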
00:05:30.822 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:30.822 EAL: Restoring previous memory policy: 4 00:05:30.822 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.822 EAL: request: mp_malloc_sync 00:05:30.822 EAL: No shared files mode enabled, IPC is disabled 00:05:30.822 EAL: Heap on socket 0 was expanded by 18MB 00:05:31.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.082 EAL: request: mp_malloc_sync 00:05:31.082 EAL: No shared files mode enabled, IPC is disabled 00:05:31.082 EAL: Heap on socket 0 was shrunk by 18MB 00:05:31.082 EAL: Trying to obtain current memory policy. 00:05:31.082 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.082 EAL: Restoring previous memory policy: 4 00:05:31.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.082 EAL: request: mp_malloc_sync 00:05:31.082 EAL: No shared files mode enabled, IPC is disabled 00:05:31.082 EAL: Heap on socket 0 was expanded by 34MB 00:05:31.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.082 EAL: request: mp_malloc_sync 00:05:31.082 EAL: No shared files mode enabled, IPC is disabled 00:05:31.082 EAL: Heap on socket 0 was shrunk by 34MB 00:05:31.082 EAL: Trying to obtain current memory policy. 00:05:31.082 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.082 EAL: Restoring previous memory policy: 4 00:05:31.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.082 EAL: request: mp_malloc_sync 00:05:31.082 EAL: No shared files mode enabled, IPC is disabled 00:05:31.082 EAL: Heap on socket 0 was expanded by 66MB 00:05:31.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.082 EAL: request: mp_malloc_sync 00:05:31.082 EAL: No shared files mode enabled, IPC is disabled 00:05:31.082 EAL: Heap on socket 0 was shrunk by 66MB 00:05:31.082 EAL: Trying to obtain current memory policy. 00:05:31.082 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.082 EAL: Restoring previous memory policy: 4 00:05:31.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.082 EAL: request: mp_malloc_sync 00:05:31.082 EAL: No shared files mode enabled, IPC is disabled 00:05:31.082 EAL: Heap on socket 0 was expanded by 130MB 00:05:31.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.082 EAL: request: mp_malloc_sync 00:05:31.082 EAL: No shared files mode enabled, IPC is disabled 00:05:31.082 EAL: Heap on socket 0 was shrunk by 130MB 00:05:31.082 EAL: Trying to obtain current memory policy. 00:05:31.082 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.082 EAL: Restoring previous memory policy: 4 00:05:31.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.082 EAL: request: mp_malloc_sync 00:05:31.082 EAL: No shared files mode enabled, IPC is disabled 00:05:31.082 EAL: Heap on socket 0 was expanded by 258MB 00:05:31.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.340 EAL: request: mp_malloc_sync 00:05:31.340 EAL: No shared files mode enabled, IPC is disabled 00:05:31.340 EAL: Heap on socket 0 was shrunk by 258MB 00:05:31.340 EAL: Trying to obtain current memory policy. 
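The expand/shrink ladder vtophys_spdk_malloc_test walks here is systematic rather than arbitrary: the heap grows by 4, 6 and 10 MB above and by 18, 34, 66, 130 and 258 MB in this stretch, i.e. 2**n + 2 MB per step, apparently so each allocation lands just past a power-of-two boundary; the 514 MB and 1026 MB steps follow below. The sequence reproduced for reference:

# Heap expansion sizes logged by vtophys_spdk_malloc_test: 2**n + 2 MB.
sizes_mb = [2**n + 2 for n in range(1, 11)]
print(sizes_mb)  # [4, 6, 10, 18, 34, 66, 130, 258, 514, 1026]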
00:05:31.340 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.340 EAL: Restoring previous memory policy: 4 00:05:31.340 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.340 EAL: request: mp_malloc_sync 00:05:31.340 EAL: No shared files mode enabled, IPC is disabled 00:05:31.340 EAL: Heap on socket 0 was expanded by 514MB 00:05:31.599 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.599 EAL: request: mp_malloc_sync 00:05:31.599 EAL: No shared files mode enabled, IPC is disabled 00:05:31.599 EAL: Heap on socket 0 was shrunk by 514MB 00:05:31.599 EAL: Trying to obtain current memory policy. 00:05:31.599 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:31.858 EAL: Restoring previous memory policy: 4 00:05:31.858 EAL: Calling mem event callback 'spdk:(nil)' 00:05:31.858 EAL: request: mp_malloc_sync 00:05:31.858 EAL: No shared files mode enabled, IPC is disabled 00:05:31.858 EAL: Heap on socket 0 was expanded by 1026MB 00:05:32.152 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.411 EAL: request: mp_malloc_sync 00:05:32.411 EAL: No shared files mode enabled, IPC is disabled 00:05:32.411 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:32.411 passed 00:05:32.411 00:05:32.411 Run Summary: Type Total Ran Passed Failed Inactive 00:05:32.411 suites 1 1 n/a 0 0 00:05:32.411 tests 2 2 2 0 0 00:05:32.411 asserts 497 497 497 0 n/a 00:05:32.411 00:05:32.411 Elapsed time = 1.360 seconds 00:05:32.411 EAL: Calling mem event callback 'spdk:(nil)' 00:05:32.411 EAL: request: mp_malloc_sync 00:05:32.411 EAL: No shared files mode enabled, IPC is disabled 00:05:32.411 EAL: Heap on socket 0 was shrunk by 2MB 00:05:32.411 EAL: No shared files mode enabled, IPC is disabled 00:05:32.411 EAL: No shared files mode enabled, IPC is disabled 00:05:32.411 EAL: No shared files mode enabled, IPC is disabled 00:05:32.411 00:05:32.411 real 0m1.477s 00:05:32.411 user 0m0.833s 00:05:32.411 sys 0m0.607s 00:05:32.411 19:53:19 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.411 19:53:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:32.411 ************************************ 00:05:32.411 END TEST env_vtophys 00:05:32.411 ************************************ 00:05:32.411 19:53:19 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:32.411 19:53:19 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:32.411 19:53:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.411 19:53:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.411 ************************************ 00:05:32.411 START TEST env_pci 00:05:32.411 ************************************ 00:05:32.411 19:53:19 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:32.411 00:05:32.411 00:05:32.411 CUnit - A unit testing framework for C - Version 2.1-3 00:05:32.411 http://cunit.sourceforge.net/ 00:05:32.411 00:05:32.411 00:05:32.411 Suite: pci 00:05:32.411 Test: pci_hook ...[2024-07-13 19:53:19.929075] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3060975 has claimed it 00:05:32.411 EAL: Cannot find device (10000:00:01.0) 00:05:32.411 EAL: Failed to attach device on primary process 00:05:32.411 passed 00:05:32.411 00:05:32.411 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:32.411 suites 1 1 n/a 0 0 00:05:32.411 tests 1 1 1 0 0 00:05:32.411 asserts 25 25 25 0 n/a 00:05:32.411 00:05:32.411 Elapsed time = 0.022 seconds 00:05:32.411 00:05:32.411 real 0m0.033s 00:05:32.411 user 0m0.010s 00:05:32.411 sys 0m0.023s 00:05:32.411 19:53:19 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.411 19:53:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:32.411 ************************************ 00:05:32.411 END TEST env_pci 00:05:32.411 ************************************ 00:05:32.411 19:53:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:32.411 19:53:19 env -- env/env.sh@15 -- # uname 00:05:32.411 19:53:19 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:32.411 19:53:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:32.411 19:53:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:32.411 19:53:19 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:05:32.411 19:53:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.411 19:53:19 env -- common/autotest_common.sh@10 -- # set +x 00:05:32.411 ************************************ 00:05:32.411 START TEST env_dpdk_post_init 00:05:32.411 ************************************ 00:05:32.411 19:53:19 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:32.411 EAL: Detected CPU lcores: 48 00:05:32.411 EAL: Detected NUMA nodes: 2 00:05:32.411 EAL: Detected shared linkage of DPDK 00:05:32.411 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:32.411 EAL: Selected IOVA mode 'VA' 00:05:32.411 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.411 EAL: VFIO support initialized 00:05:32.411 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:32.670 EAL: Using IOMMU type 1 (Type 1) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:32.670 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:33.604 EAL: Probe PCI 
driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:36.879 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:36.879 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:36.879 Starting DPDK initialization... 00:05:36.879 Starting SPDK post initialization... 00:05:36.879 SPDK NVMe probe 00:05:36.879 Attaching to 0000:88:00.0 00:05:36.879 Attached to 0000:88:00.0 00:05:36.879 Cleaning up... 00:05:36.879 00:05:36.879 real 0m4.441s 00:05:36.879 user 0m3.320s 00:05:36.879 sys 0m0.178s 00:05:36.879 19:53:24 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.879 19:53:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.879 ************************************ 00:05:36.879 END TEST env_dpdk_post_init 00:05:36.879 ************************************ 00:05:36.879 19:53:24 env -- env/env.sh@26 -- # uname 00:05:36.879 19:53:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:36.879 19:53:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:36.879 19:53:24 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:36.879 19:53:24 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.879 19:53:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.879 ************************************ 00:05:36.879 START TEST env_mem_callbacks 00:05:36.879 ************************************ 00:05:36.879 19:53:24 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:36.879 EAL: Detected CPU lcores: 48 00:05:36.879 EAL: Detected NUMA nodes: 2 00:05:36.879 EAL: Detected shared linkage of DPDK 00:05:36.879 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:36.879 EAL: Selected IOVA mode 'VA' 00:05:36.879 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.879 EAL: VFIO support initialized 00:05:36.879 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:36.879 00:05:36.879 00:05:36.880 CUnit - A unit testing framework for C - Version 2.1-3 00:05:36.880 http://cunit.sourceforge.net/ 00:05:36.880 00:05:36.880 00:05:36.880 Suite: memory 00:05:36.880 Test: test ... 
00:05:36.880 register 0x200000200000 2097152 00:05:36.880 malloc 3145728 00:05:36.880 register 0x200000400000 4194304 00:05:36.880 buf 0x200000500000 len 3145728 PASSED 00:05:36.880 malloc 64 00:05:36.880 buf 0x2000004fff40 len 64 PASSED 00:05:36.880 malloc 4194304 00:05:36.880 register 0x200000800000 6291456 00:05:36.880 buf 0x200000a00000 len 4194304 PASSED 00:05:36.880 free 0x200000500000 3145728 00:05:36.880 free 0x2000004fff40 64 00:05:36.880 unregister 0x200000400000 4194304 PASSED 00:05:36.880 free 0x200000a00000 4194304 00:05:36.880 unregister 0x200000800000 6291456 PASSED 00:05:36.880 malloc 8388608 00:05:36.880 register 0x200000400000 10485760 00:05:36.880 buf 0x200000600000 len 8388608 PASSED 00:05:36.880 free 0x200000600000 8388608 00:05:36.880 unregister 0x200000400000 10485760 PASSED 00:05:37.137 passed 00:05:37.137 00:05:37.137 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.137 suites 1 1 n/a 0 0 00:05:37.137 tests 1 1 1 0 0 00:05:37.137 asserts 15 15 15 0 n/a 00:05:37.137 00:05:37.137 Elapsed time = 0.006 seconds 00:05:37.137 00:05:37.137 real 0m0.048s 00:05:37.137 user 0m0.015s 00:05:37.137 sys 0m0.033s 00:05:37.137 19:53:24 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.137 19:53:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:37.137 ************************************ 00:05:37.137 END TEST env_mem_callbacks 00:05:37.137 ************************************ 00:05:37.137 00:05:37.137 real 0m6.429s 00:05:37.137 user 0m4.433s 00:05:37.137 sys 0m1.032s 00:05:37.137 19:53:24 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.137 19:53:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:37.137 ************************************ 00:05:37.137 END TEST env 00:05:37.137 ************************************ 00:05:37.137 19:53:24 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:37.137 19:53:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.137 19:53:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.137 19:53:24 -- common/autotest_common.sh@10 -- # set +x 00:05:37.137 ************************************ 00:05:37.137 START TEST rpc 00:05:37.137 ************************************ 00:05:37.137 19:53:24 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:37.137 * Looking for test storage... 00:05:37.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:37.137 19:53:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3061632 00:05:37.137 19:53:24 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:37.137 19:53:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.137 19:53:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3061632 00:05:37.137 19:53:24 rpc -- common/autotest_common.sh@827 -- # '[' -z 3061632 ']' 00:05:37.137 19:53:24 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.137 19:53:24 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:37.137 19:53:24 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:37.137 19:53:24 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:37.137 19:53:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.137 [2024-07-13 19:53:24.709756] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:37.137 [2024-07-13 19:53:24.709833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061632 ] 00:05:37.137 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.137 [2024-07-13 19:53:24.770189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.394 [2024-07-13 19:53:24.859738] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:37.394 [2024-07-13 19:53:24.859791] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3061632' to capture a snapshot of events at runtime. 00:05:37.394 [2024-07-13 19:53:24.859814] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:37.394 [2024-07-13 19:53:24.859825] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:37.394 [2024-07-13 19:53:24.859836] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3061632 for offline analysis/debug. 00:05:37.394 [2024-07-13 19:53:24.859894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.651 19:53:25 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:37.651 19:53:25 rpc -- common/autotest_common.sh@860 -- # return 0 00:05:37.652 19:53:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:37.652 19:53:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:37.652 19:53:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:37.652 19:53:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:37.652 19:53:25 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.652 19:53:25 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.652 19:53:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.652 ************************************ 00:05:37.652 START TEST rpc_integrity 00:05:37.652 ************************************ 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:37.652 19:53:25 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:37.652 { 00:05:37.652 "name": "Malloc0", 00:05:37.652 "aliases": [ 00:05:37.652 "270a7e69-1eda-4955-b2a2-80cd3d74e9ea" 00:05:37.652 ], 00:05:37.652 "product_name": "Malloc disk", 00:05:37.652 "block_size": 512, 00:05:37.652 "num_blocks": 16384, 00:05:37.652 "uuid": "270a7e69-1eda-4955-b2a2-80cd3d74e9ea", 00:05:37.652 "assigned_rate_limits": { 00:05:37.652 "rw_ios_per_sec": 0, 00:05:37.652 "rw_mbytes_per_sec": 0, 00:05:37.652 "r_mbytes_per_sec": 0, 00:05:37.652 "w_mbytes_per_sec": 0 00:05:37.652 }, 00:05:37.652 "claimed": false, 00:05:37.652 "zoned": false, 00:05:37.652 "supported_io_types": { 00:05:37.652 "read": true, 00:05:37.652 "write": true, 00:05:37.652 "unmap": true, 00:05:37.652 "write_zeroes": true, 00:05:37.652 "flush": true, 00:05:37.652 "reset": true, 00:05:37.652 "compare": false, 00:05:37.652 "compare_and_write": false, 00:05:37.652 "abort": true, 00:05:37.652 "nvme_admin": false, 00:05:37.652 "nvme_io": false 00:05:37.652 }, 00:05:37.652 "memory_domains": [ 00:05:37.652 { 00:05:37.652 "dma_device_id": "system", 00:05:37.652 "dma_device_type": 1 00:05:37.652 }, 00:05:37.652 { 00:05:37.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.652 "dma_device_type": 2 00:05:37.652 } 00:05:37.652 ], 00:05:37.652 "driver_specific": {} 00:05:37.652 } 00:05:37.652 ]' 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.652 [2024-07-13 19:53:25.242870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:37.652 [2024-07-13 19:53:25.242936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:37.652 [2024-07-13 19:53:25.242961] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11f28f0 00:05:37.652 [2024-07-13 19:53:25.242975] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:37.652 [2024-07-13 19:53:25.244396] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:37.652 [2024-07-13 19:53:25.244425] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:37.652 Passthru0 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:37.652 { 00:05:37.652 "name": "Malloc0", 00:05:37.652 "aliases": [ 00:05:37.652 "270a7e69-1eda-4955-b2a2-80cd3d74e9ea" 00:05:37.652 ], 00:05:37.652 "product_name": "Malloc disk", 00:05:37.652 "block_size": 512, 00:05:37.652 "num_blocks": 16384, 00:05:37.652 "uuid": "270a7e69-1eda-4955-b2a2-80cd3d74e9ea", 00:05:37.652 "assigned_rate_limits": { 00:05:37.652 "rw_ios_per_sec": 0, 00:05:37.652 "rw_mbytes_per_sec": 0, 00:05:37.652 "r_mbytes_per_sec": 0, 00:05:37.652 "w_mbytes_per_sec": 0 00:05:37.652 }, 00:05:37.652 "claimed": true, 00:05:37.652 "claim_type": "exclusive_write", 00:05:37.652 "zoned": false, 00:05:37.652 "supported_io_types": { 00:05:37.652 "read": true, 00:05:37.652 "write": true, 00:05:37.652 "unmap": true, 00:05:37.652 "write_zeroes": true, 00:05:37.652 "flush": true, 00:05:37.652 "reset": true, 00:05:37.652 "compare": false, 00:05:37.652 "compare_and_write": false, 00:05:37.652 "abort": true, 00:05:37.652 "nvme_admin": false, 00:05:37.652 "nvme_io": false 00:05:37.652 }, 00:05:37.652 "memory_domains": [ 00:05:37.652 { 00:05:37.652 "dma_device_id": "system", 00:05:37.652 "dma_device_type": 1 00:05:37.652 }, 00:05:37.652 { 00:05:37.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.652 "dma_device_type": 2 00:05:37.652 } 00:05:37.652 ], 00:05:37.652 "driver_specific": {} 00:05:37.652 }, 00:05:37.652 { 00:05:37.652 "name": "Passthru0", 00:05:37.652 "aliases": [ 00:05:37.652 "e8a53462-e6c9-50c4-b037-cbc2b9eaaea6" 00:05:37.652 ], 00:05:37.652 "product_name": "passthru", 00:05:37.652 "block_size": 512, 00:05:37.652 "num_blocks": 16384, 00:05:37.652 "uuid": "e8a53462-e6c9-50c4-b037-cbc2b9eaaea6", 00:05:37.652 "assigned_rate_limits": { 00:05:37.652 "rw_ios_per_sec": 0, 00:05:37.652 "rw_mbytes_per_sec": 0, 00:05:37.652 "r_mbytes_per_sec": 0, 00:05:37.652 "w_mbytes_per_sec": 0 00:05:37.652 }, 00:05:37.652 "claimed": false, 00:05:37.652 "zoned": false, 00:05:37.652 "supported_io_types": { 00:05:37.652 "read": true, 00:05:37.652 "write": true, 00:05:37.652 "unmap": true, 00:05:37.652 "write_zeroes": true, 00:05:37.652 "flush": true, 00:05:37.652 "reset": true, 00:05:37.652 "compare": false, 00:05:37.652 "compare_and_write": false, 00:05:37.652 "abort": true, 00:05:37.652 "nvme_admin": false, 00:05:37.652 "nvme_io": false 00:05:37.652 }, 00:05:37.652 "memory_domains": [ 00:05:37.652 { 00:05:37.652 "dma_device_id": "system", 00:05:37.652 "dma_device_type": 1 00:05:37.652 }, 00:05:37.652 { 00:05:37.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.652 "dma_device_type": 2 00:05:37.652 } 00:05:37.652 ], 00:05:37.652 "driver_specific": { 00:05:37.652 "passthru": { 00:05:37.652 "name": "Passthru0", 00:05:37.652 "base_bdev_name": "Malloc0" 00:05:37.652 } 00:05:37.652 } 00:05:37.652 } 00:05:37.652 ]' 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.652 
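The rpc_integrity assertions around these JSON dumps are jq checks over bdev_get_bdevs output: after bdev_passthru_create, the array must have length 2 and contain both Malloc0 and the Passthru0 that claims it. An equivalent check in Python, with bdevs_json standing in (as a placeholder) for the captured '[ ... ]' string above:

import json

bdevs = json.loads(bdevs_json)  # bdevs_json: the dumped array (placeholder)
assert len(bdevs) == 2          # the test's `jq length` == 2 check
by_name = {b["name"]: b for b in bdevs}
assert by_name["Malloc0"]["claimed"] is True
assert by_name["Malloc0"]["claim_type"] == "exclusive_write"
assert (by_name["Passthru0"]["driver_specific"]["passthru"]["base_bdev_name"]
        == "Malloc0")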
19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.652 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.652 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.909 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.909 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:37.909 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.909 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.909 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.909 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:37.909 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:37.909 19:53:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:37.909 00:05:37.909 real 0m0.229s 00:05:37.909 user 0m0.154s 00:05:37.909 sys 0m0.017s 00:05:37.909 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.909 19:53:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:37.909 ************************************ 00:05:37.909 END TEST rpc_integrity 00:05:37.909 ************************************ 00:05:37.909 19:53:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:37.909 19:53:25 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.909 19:53:25 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.909 19:53:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.909 ************************************ 00:05:37.909 START TEST rpc_plugins 00:05:37.909 ************************************ 00:05:37.909 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:05:37.909 19:53:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:37.909 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.909 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.909 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.909 19:53:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:37.909 19:53:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:37.909 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.909 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.909 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.909 19:53:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:37.909 { 00:05:37.909 "name": "Malloc1", 00:05:37.909 "aliases": [ 00:05:37.909 "dd1256a2-602e-4bb6-8cb6-41c2abd58ac7" 00:05:37.909 ], 00:05:37.909 "product_name": "Malloc disk", 00:05:37.909 "block_size": 4096, 00:05:37.909 "num_blocks": 256, 00:05:37.909 "uuid": "dd1256a2-602e-4bb6-8cb6-41c2abd58ac7", 00:05:37.909 "assigned_rate_limits": { 00:05:37.909 "rw_ios_per_sec": 0, 00:05:37.909 "rw_mbytes_per_sec": 0, 00:05:37.909 "r_mbytes_per_sec": 0, 00:05:37.909 "w_mbytes_per_sec": 0 00:05:37.909 }, 00:05:37.909 "claimed": false, 00:05:37.909 "zoned": false, 00:05:37.909 "supported_io_types": { 00:05:37.909 "read": true, 00:05:37.909 "write": true, 00:05:37.909 "unmap": true, 00:05:37.909 "write_zeroes": true, 00:05:37.909 
"flush": true, 00:05:37.909 "reset": true, 00:05:37.909 "compare": false, 00:05:37.909 "compare_and_write": false, 00:05:37.909 "abort": true, 00:05:37.909 "nvme_admin": false, 00:05:37.909 "nvme_io": false 00:05:37.909 }, 00:05:37.909 "memory_domains": [ 00:05:37.909 { 00:05:37.909 "dma_device_id": "system", 00:05:37.909 "dma_device_type": 1 00:05:37.910 }, 00:05:37.910 { 00:05:37.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:37.910 "dma_device_type": 2 00:05:37.910 } 00:05:37.910 ], 00:05:37.910 "driver_specific": {} 00:05:37.910 } 00:05:37.910 ]' 00:05:37.910 19:53:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:37.910 19:53:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:37.910 19:53:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:37.910 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.910 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.910 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.910 19:53:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:37.910 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.910 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.910 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.910 19:53:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:37.910 19:53:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:37.910 19:53:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:37.910 00:05:37.910 real 0m0.111s 00:05:37.910 user 0m0.076s 00:05:37.910 sys 0m0.008s 00:05:37.910 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.910 19:53:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:37.910 ************************************ 00:05:37.910 END TEST rpc_plugins 00:05:37.910 ************************************ 00:05:37.910 19:53:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:37.910 19:53:25 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:37.910 19:53:25 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.910 19:53:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.166 ************************************ 00:05:38.166 START TEST rpc_trace_cmd_test 00:05:38.166 ************************************ 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:38.166 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3061632", 00:05:38.166 "tpoint_group_mask": "0x8", 00:05:38.166 "iscsi_conn": { 00:05:38.166 "mask": "0x2", 00:05:38.166 "tpoint_mask": "0x0" 00:05:38.166 }, 00:05:38.166 "scsi": { 00:05:38.166 "mask": "0x4", 00:05:38.166 "tpoint_mask": "0x0" 00:05:38.166 }, 00:05:38.166 "bdev": { 00:05:38.166 "mask": "0x8", 00:05:38.166 "tpoint_mask": 
"0xffffffffffffffff" 00:05:38.166 }, 00:05:38.166 "nvmf_rdma": { 00:05:38.166 "mask": "0x10", 00:05:38.166 "tpoint_mask": "0x0" 00:05:38.166 }, 00:05:38.166 "nvmf_tcp": { 00:05:38.166 "mask": "0x20", 00:05:38.166 "tpoint_mask": "0x0" 00:05:38.166 }, 00:05:38.166 "ftl": { 00:05:38.166 "mask": "0x40", 00:05:38.166 "tpoint_mask": "0x0" 00:05:38.166 }, 00:05:38.166 "blobfs": { 00:05:38.166 "mask": "0x80", 00:05:38.166 "tpoint_mask": "0x0" 00:05:38.166 }, 00:05:38.166 "dsa": { 00:05:38.166 "mask": "0x200", 00:05:38.166 "tpoint_mask": "0x0" 00:05:38.166 }, 00:05:38.166 "thread": { 00:05:38.166 "mask": "0x400", 00:05:38.166 "tpoint_mask": "0x0" 00:05:38.166 }, 00:05:38.166 "nvme_pcie": { 00:05:38.166 "mask": "0x800", 00:05:38.166 "tpoint_mask": "0x0" 00:05:38.166 }, 00:05:38.166 "iaa": { 00:05:38.166 "mask": "0x1000", 00:05:38.166 "tpoint_mask": "0x0" 00:05:38.166 }, 00:05:38.166 "nvme_tcp": { 00:05:38.166 "mask": "0x2000", 00:05:38.166 "tpoint_mask": "0x0" 00:05:38.166 }, 00:05:38.166 "bdev_nvme": { 00:05:38.166 "mask": "0x4000", 00:05:38.166 "tpoint_mask": "0x0" 00:05:38.166 }, 00:05:38.166 "sock": { 00:05:38.166 "mask": "0x8000", 00:05:38.166 "tpoint_mask": "0x0" 00:05:38.166 } 00:05:38.166 }' 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:38.166 00:05:38.166 real 0m0.197s 00:05:38.166 user 0m0.170s 00:05:38.166 sys 0m0.018s 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.166 19:53:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:38.166 ************************************ 00:05:38.166 END TEST rpc_trace_cmd_test 00:05:38.166 ************************************ 00:05:38.166 19:53:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:38.166 19:53:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:38.166 19:53:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:38.166 19:53:25 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.166 19:53:25 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.166 19:53:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.166 ************************************ 00:05:38.166 START TEST rpc_daemon_integrity 00:05:38.166 ************************************ 00:05:38.166 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:05:38.166 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:38.166 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.166 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.166 19:53:25 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.166 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:38.166 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:38.424 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:38.424 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:38.424 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.424 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.424 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.424 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:38.424 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:38.424 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.424 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.424 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.424 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:38.424 { 00:05:38.424 "name": "Malloc2", 00:05:38.424 "aliases": [ 00:05:38.424 "77b50189-eefb-42f3-9805-af3b73ead30a" 00:05:38.424 ], 00:05:38.424 "product_name": "Malloc disk", 00:05:38.424 "block_size": 512, 00:05:38.424 "num_blocks": 16384, 00:05:38.424 "uuid": "77b50189-eefb-42f3-9805-af3b73ead30a", 00:05:38.424 "assigned_rate_limits": { 00:05:38.424 "rw_ios_per_sec": 0, 00:05:38.424 "rw_mbytes_per_sec": 0, 00:05:38.424 "r_mbytes_per_sec": 0, 00:05:38.424 "w_mbytes_per_sec": 0 00:05:38.424 }, 00:05:38.424 "claimed": false, 00:05:38.424 "zoned": false, 00:05:38.424 "supported_io_types": { 00:05:38.424 "read": true, 00:05:38.424 "write": true, 00:05:38.424 "unmap": true, 00:05:38.424 "write_zeroes": true, 00:05:38.424 "flush": true, 00:05:38.424 "reset": true, 00:05:38.424 "compare": false, 00:05:38.424 "compare_and_write": false, 00:05:38.424 "abort": true, 00:05:38.424 "nvme_admin": false, 00:05:38.424 "nvme_io": false 00:05:38.424 }, 00:05:38.424 "memory_domains": [ 00:05:38.424 { 00:05:38.424 "dma_device_id": "system", 00:05:38.424 "dma_device_type": 1 00:05:38.424 }, 00:05:38.424 { 00:05:38.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.424 "dma_device_type": 2 00:05:38.424 } 00:05:38.424 ], 00:05:38.424 "driver_specific": {} 00:05:38.424 } 00:05:38.424 ]' 00:05:38.424 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:38.424 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:38.424 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.425 [2024-07-13 19:53:25.916774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:38.425 [2024-07-13 19:53:25.916817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:38.425 [2024-07-13 19:53:25.916842] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10ed600 00:05:38.425 [2024-07-13 19:53:25.916858] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:38.425 [2024-07-13 19:53:25.918331] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:38.425 [2024-07-13 19:53:25.918360] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:38.425 Passthru0 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:38.425 { 00:05:38.425 "name": "Malloc2", 00:05:38.425 "aliases": [ 00:05:38.425 "77b50189-eefb-42f3-9805-af3b73ead30a" 00:05:38.425 ], 00:05:38.425 "product_name": "Malloc disk", 00:05:38.425 "block_size": 512, 00:05:38.425 "num_blocks": 16384, 00:05:38.425 "uuid": "77b50189-eefb-42f3-9805-af3b73ead30a", 00:05:38.425 "assigned_rate_limits": { 00:05:38.425 "rw_ios_per_sec": 0, 00:05:38.425 "rw_mbytes_per_sec": 0, 00:05:38.425 "r_mbytes_per_sec": 0, 00:05:38.425 "w_mbytes_per_sec": 0 00:05:38.425 }, 00:05:38.425 "claimed": true, 00:05:38.425 "claim_type": "exclusive_write", 00:05:38.425 "zoned": false, 00:05:38.425 "supported_io_types": { 00:05:38.425 "read": true, 00:05:38.425 "write": true, 00:05:38.425 "unmap": true, 00:05:38.425 "write_zeroes": true, 00:05:38.425 "flush": true, 00:05:38.425 "reset": true, 00:05:38.425 "compare": false, 00:05:38.425 "compare_and_write": false, 00:05:38.425 "abort": true, 00:05:38.425 "nvme_admin": false, 00:05:38.425 "nvme_io": false 00:05:38.425 }, 00:05:38.425 "memory_domains": [ 00:05:38.425 { 00:05:38.425 "dma_device_id": "system", 00:05:38.425 "dma_device_type": 1 00:05:38.425 }, 00:05:38.425 { 00:05:38.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.425 "dma_device_type": 2 00:05:38.425 } 00:05:38.425 ], 00:05:38.425 "driver_specific": {} 00:05:38.425 }, 00:05:38.425 { 00:05:38.425 "name": "Passthru0", 00:05:38.425 "aliases": [ 00:05:38.425 "3a0bd36b-3227-55b7-a6df-0da88ae167d8" 00:05:38.425 ], 00:05:38.425 "product_name": "passthru", 00:05:38.425 "block_size": 512, 00:05:38.425 "num_blocks": 16384, 00:05:38.425 "uuid": "3a0bd36b-3227-55b7-a6df-0da88ae167d8", 00:05:38.425 "assigned_rate_limits": { 00:05:38.425 "rw_ios_per_sec": 0, 00:05:38.425 "rw_mbytes_per_sec": 0, 00:05:38.425 "r_mbytes_per_sec": 0, 00:05:38.425 "w_mbytes_per_sec": 0 00:05:38.425 }, 00:05:38.425 "claimed": false, 00:05:38.425 "zoned": false, 00:05:38.425 "supported_io_types": { 00:05:38.425 "read": true, 00:05:38.425 "write": true, 00:05:38.425 "unmap": true, 00:05:38.425 "write_zeroes": true, 00:05:38.425 "flush": true, 00:05:38.425 "reset": true, 00:05:38.425 "compare": false, 00:05:38.425 "compare_and_write": false, 00:05:38.425 "abort": true, 00:05:38.425 "nvme_admin": false, 00:05:38.425 "nvme_io": false 00:05:38.425 }, 00:05:38.425 "memory_domains": [ 00:05:38.425 { 00:05:38.425 "dma_device_id": "system", 00:05:38.425 "dma_device_type": 1 00:05:38.425 }, 00:05:38.425 { 00:05:38.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:38.425 "dma_device_type": 2 00:05:38.425 } 00:05:38.425 ], 00:05:38.425 "driver_specific": { 00:05:38.425 "passthru": { 00:05:38.425 "name": "Passthru0", 00:05:38.425 "base_bdev_name": "Malloc2" 00:05:38.425 } 00:05:38.425 } 00:05:38.425 } 00:05:38.425 ]' 00:05:38.425 19:53:25 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:38.425 19:53:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:38.425 19:53:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:38.425 00:05:38.425 real 0m0.222s 00:05:38.425 user 0m0.146s 00:05:38.425 sys 0m0.024s 00:05:38.425 19:53:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.425 19:53:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:38.425 ************************************ 00:05:38.425 END TEST rpc_daemon_integrity 00:05:38.425 ************************************ 00:05:38.425 19:53:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:38.425 19:53:26 rpc -- rpc/rpc.sh@84 -- # killprocess 3061632 00:05:38.425 19:53:26 rpc -- common/autotest_common.sh@946 -- # '[' -z 3061632 ']' 00:05:38.425 19:53:26 rpc -- common/autotest_common.sh@950 -- # kill -0 3061632 00:05:38.425 19:53:26 rpc -- common/autotest_common.sh@951 -- # uname 00:05:38.425 19:53:26 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:38.425 19:53:26 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3061632 00:05:38.684 19:53:26 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:38.684 19:53:26 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:38.684 19:53:26 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3061632' 00:05:38.684 killing process with pid 3061632 00:05:38.684 19:53:26 rpc -- common/autotest_common.sh@965 -- # kill 3061632 00:05:38.684 19:53:26 rpc -- common/autotest_common.sh@970 -- # wait 3061632 00:05:38.942 00:05:38.942 real 0m1.871s 00:05:38.942 user 0m2.362s 00:05:38.942 sys 0m0.594s 00:05:38.942 19:53:26 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.942 19:53:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.942 ************************************ 00:05:38.942 END TEST rpc 00:05:38.942 ************************************ 00:05:38.942 19:53:26 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:38.942 19:53:26 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.942 19:53:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.942 19:53:26 -- common/autotest_common.sh@10 -- # set +x 00:05:38.942 ************************************ 00:05:38.942 START TEST skip_rpc 00:05:38.942 ************************************ 00:05:38.942 19:53:26 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:38.942 * Looking for test storage... 00:05:38.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:38.942 19:53:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:38.942 19:53:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:38.942 19:53:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:38.942 19:53:26 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.942 19:53:26 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.942 19:53:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.200 ************************************ 00:05:39.200 START TEST skip_rpc 00:05:39.200 ************************************ 00:05:39.200 19:53:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:05:39.200 19:53:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3062069 00:05:39.200 19:53:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:39.200 19:53:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.200 19:53:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:39.200 [2024-07-13 19:53:26.657464] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
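The test starting here, test_skip_rpc, launches spdk_tgt with --no-rpc-server and then asserts that any RPC call fails. A minimal sketch of that flow, assuming build/bin and scripts from the SPDK checkout above are on PATH (rpc_cmd in the harness is a thin wrapper around scripts/rpc.py):

  spdk_tgt --no-rpc-server -m 0x1 &    # target runs, but no RPC listener is created
  spdk_pid=$!
  sleep 5
  # spdk_get_version must fail here for the test to pass:
  if rpc.py spdk_get_version; then
      echo "FAIL: RPC server unexpectedly answered" >&2
  fi
  kill "$spdk_pid" && wait "$spdk_pid"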
00:05:39.200 [2024-07-13 19:53:26.657561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3062069 ] 00:05:39.200 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.200 [2024-07-13 19:53:26.716936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.200 [2024-07-13 19:53:26.805155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3062069 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3062069 ']' 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3062069 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3062069 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3062069' 00:05:44.458 killing process with pid 3062069 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3062069 00:05:44.458 19:53:31 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3062069 00:05:44.458 00:05:44.458 real 0m5.417s 00:05:44.458 user 0m5.101s 00:05:44.458 sys 0m0.319s 00:05:44.458 19:53:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.458 19:53:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.458 ************************************ 00:05:44.458 END TEST skip_rpc 
00:05:44.458 ************************************ 00:05:44.458 19:53:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:44.458 19:53:32 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:44.458 19:53:32 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.458 19:53:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.458 ************************************ 00:05:44.458 START TEST skip_rpc_with_json 00:05:44.458 ************************************ 00:05:44.458 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:05:44.458 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:44.458 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3062770 00:05:44.458 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.458 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.458 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3062770 00:05:44.458 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3062770 ']' 00:05:44.458 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.458 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:44.458 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.458 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:44.458 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.716 [2024-07-13 19:53:32.122035] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
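Condensing what test_skip_rpc_with_json exercises below into a sketch (commands as they appear in the trace; the output path is an assumption): the TCP transport is queried before it exists, created, and the full runtime configuration is saved as JSON so a second target can boot from it non-interactively.

  rpc.py nvmf_get_transports --trtype tcp    # fails first: transport 'tcp' does not exist
  rpc.py nvmf_create_transport -t tcp        # "*** TCP Transport Init ***"
  rpc.py save_config > /tmp/config.json      # dumps the subsystem JSON shown below
  spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json   # replay the saved config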
00:05:44.716 [2024-07-13 19:53:32.122113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3062770 ] 00:05:44.716 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.716 [2024-07-13 19:53:32.180556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.716 [2024-07-13 19:53:32.268124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.974 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:44.974 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:05:44.974 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:44.974 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.974 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.974 [2024-07-13 19:53:32.522259] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:44.974 request: 00:05:44.974 { 00:05:44.974 "trtype": "tcp", 00:05:44.974 "method": "nvmf_get_transports", 00:05:44.974 "req_id": 1 00:05:44.974 } 00:05:44.974 Got JSON-RPC error response 00:05:44.974 response: 00:05:44.974 { 00:05:44.974 "code": -19, 00:05:44.974 "message": "No such device" 00:05:44.974 } 00:05:44.974 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:44.974 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:44.974 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.974 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.974 [2024-07-13 19:53:32.530388] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:44.974 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:44.974 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:44.974 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:44.974 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:45.233 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.233 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:45.233 { 00:05:45.233 "subsystems": [ 00:05:45.233 { 00:05:45.233 "subsystem": "vfio_user_target", 00:05:45.233 "config": null 00:05:45.233 }, 00:05:45.233 { 00:05:45.233 "subsystem": "keyring", 00:05:45.233 "config": [] 00:05:45.233 }, 00:05:45.233 { 00:05:45.233 "subsystem": "iobuf", 00:05:45.233 "config": [ 00:05:45.233 { 00:05:45.233 "method": "iobuf_set_options", 00:05:45.233 "params": { 00:05:45.233 "small_pool_count": 8192, 00:05:45.233 "large_pool_count": 1024, 00:05:45.233 "small_bufsize": 8192, 00:05:45.233 "large_bufsize": 135168 00:05:45.233 } 00:05:45.233 } 00:05:45.233 ] 00:05:45.233 }, 00:05:45.233 { 00:05:45.233 "subsystem": "sock", 00:05:45.233 "config": [ 00:05:45.233 { 00:05:45.233 "method": "sock_set_default_impl", 00:05:45.233 "params": { 00:05:45.233 "impl_name": "posix" 00:05:45.233 } 00:05:45.233 }, 00:05:45.233 { 00:05:45.233 "method": 
"sock_impl_set_options", 00:05:45.233 "params": { 00:05:45.233 "impl_name": "ssl", 00:05:45.233 "recv_buf_size": 4096, 00:05:45.233 "send_buf_size": 4096, 00:05:45.233 "enable_recv_pipe": true, 00:05:45.233 "enable_quickack": false, 00:05:45.233 "enable_placement_id": 0, 00:05:45.233 "enable_zerocopy_send_server": true, 00:05:45.233 "enable_zerocopy_send_client": false, 00:05:45.233 "zerocopy_threshold": 0, 00:05:45.233 "tls_version": 0, 00:05:45.233 "enable_ktls": false 00:05:45.233 } 00:05:45.233 }, 00:05:45.233 { 00:05:45.233 "method": "sock_impl_set_options", 00:05:45.233 "params": { 00:05:45.233 "impl_name": "posix", 00:05:45.233 "recv_buf_size": 2097152, 00:05:45.233 "send_buf_size": 2097152, 00:05:45.233 "enable_recv_pipe": true, 00:05:45.233 "enable_quickack": false, 00:05:45.233 "enable_placement_id": 0, 00:05:45.233 "enable_zerocopy_send_server": true, 00:05:45.233 "enable_zerocopy_send_client": false, 00:05:45.233 "zerocopy_threshold": 0, 00:05:45.233 "tls_version": 0, 00:05:45.233 "enable_ktls": false 00:05:45.233 } 00:05:45.233 } 00:05:45.233 ] 00:05:45.233 }, 00:05:45.233 { 00:05:45.233 "subsystem": "vmd", 00:05:45.233 "config": [] 00:05:45.233 }, 00:05:45.233 { 00:05:45.233 "subsystem": "accel", 00:05:45.233 "config": [ 00:05:45.233 { 00:05:45.233 "method": "accel_set_options", 00:05:45.233 "params": { 00:05:45.233 "small_cache_size": 128, 00:05:45.233 "large_cache_size": 16, 00:05:45.233 "task_count": 2048, 00:05:45.233 "sequence_count": 2048, 00:05:45.233 "buf_count": 2048 00:05:45.233 } 00:05:45.233 } 00:05:45.233 ] 00:05:45.233 }, 00:05:45.233 { 00:05:45.233 "subsystem": "bdev", 00:05:45.233 "config": [ 00:05:45.233 { 00:05:45.233 "method": "bdev_set_options", 00:05:45.233 "params": { 00:05:45.233 "bdev_io_pool_size": 65535, 00:05:45.233 "bdev_io_cache_size": 256, 00:05:45.233 "bdev_auto_examine": true, 00:05:45.233 "iobuf_small_cache_size": 128, 00:05:45.233 "iobuf_large_cache_size": 16 00:05:45.233 } 00:05:45.233 }, 00:05:45.233 { 00:05:45.233 "method": "bdev_raid_set_options", 00:05:45.233 "params": { 00:05:45.233 "process_window_size_kb": 1024 00:05:45.233 } 00:05:45.233 }, 00:05:45.233 { 00:05:45.233 "method": "bdev_iscsi_set_options", 00:05:45.233 "params": { 00:05:45.233 "timeout_sec": 30 00:05:45.233 } 00:05:45.233 }, 00:05:45.233 { 00:05:45.233 "method": "bdev_nvme_set_options", 00:05:45.233 "params": { 00:05:45.233 "action_on_timeout": "none", 00:05:45.233 "timeout_us": 0, 00:05:45.233 "timeout_admin_us": 0, 00:05:45.233 "keep_alive_timeout_ms": 10000, 00:05:45.233 "arbitration_burst": 0, 00:05:45.233 "low_priority_weight": 0, 00:05:45.233 "medium_priority_weight": 0, 00:05:45.233 "high_priority_weight": 0, 00:05:45.233 "nvme_adminq_poll_period_us": 10000, 00:05:45.233 "nvme_ioq_poll_period_us": 0, 00:05:45.233 "io_queue_requests": 0, 00:05:45.233 "delay_cmd_submit": true, 00:05:45.233 "transport_retry_count": 4, 00:05:45.234 "bdev_retry_count": 3, 00:05:45.234 "transport_ack_timeout": 0, 00:05:45.234 "ctrlr_loss_timeout_sec": 0, 00:05:45.234 "reconnect_delay_sec": 0, 00:05:45.234 "fast_io_fail_timeout_sec": 0, 00:05:45.234 "disable_auto_failback": false, 00:05:45.234 "generate_uuids": false, 00:05:45.234 "transport_tos": 0, 00:05:45.234 "nvme_error_stat": false, 00:05:45.234 "rdma_srq_size": 0, 00:05:45.234 "io_path_stat": false, 00:05:45.234 "allow_accel_sequence": false, 00:05:45.234 "rdma_max_cq_size": 0, 00:05:45.234 "rdma_cm_event_timeout_ms": 0, 00:05:45.234 "dhchap_digests": [ 00:05:45.234 "sha256", 00:05:45.234 "sha384", 00:05:45.234 "sha512" 
00:05:45.234 ], 00:05:45.234 "dhchap_dhgroups": [ 00:05:45.234 "null", 00:05:45.234 "ffdhe2048", 00:05:45.234 "ffdhe3072", 00:05:45.234 "ffdhe4096", 00:05:45.234 "ffdhe6144", 00:05:45.234 "ffdhe8192" 00:05:45.234 ] 00:05:45.234 } 00:05:45.234 }, 00:05:45.234 { 00:05:45.234 "method": "bdev_nvme_set_hotplug", 00:05:45.234 "params": { 00:05:45.234 "period_us": 100000, 00:05:45.234 "enable": false 00:05:45.234 } 00:05:45.234 }, 00:05:45.234 { 00:05:45.234 "method": "bdev_wait_for_examine" 00:05:45.234 } 00:05:45.234 ] 00:05:45.234 }, 00:05:45.234 { 00:05:45.234 "subsystem": "scsi", 00:05:45.234 "config": null 00:05:45.234 }, 00:05:45.234 { 00:05:45.234 "subsystem": "scheduler", 00:05:45.234 "config": [ 00:05:45.234 { 00:05:45.234 "method": "framework_set_scheduler", 00:05:45.234 "params": { 00:05:45.234 "name": "static" 00:05:45.234 } 00:05:45.234 } 00:05:45.234 ] 00:05:45.234 }, 00:05:45.234 { 00:05:45.234 "subsystem": "vhost_scsi", 00:05:45.234 "config": [] 00:05:45.234 }, 00:05:45.234 { 00:05:45.234 "subsystem": "vhost_blk", 00:05:45.234 "config": [] 00:05:45.234 }, 00:05:45.234 { 00:05:45.234 "subsystem": "ublk", 00:05:45.234 "config": [] 00:05:45.234 }, 00:05:45.234 { 00:05:45.234 "subsystem": "nbd", 00:05:45.234 "config": [] 00:05:45.234 }, 00:05:45.234 { 00:05:45.234 "subsystem": "nvmf", 00:05:45.234 "config": [ 00:05:45.234 { 00:05:45.234 "method": "nvmf_set_config", 00:05:45.234 "params": { 00:05:45.234 "discovery_filter": "match_any", 00:05:45.234 "admin_cmd_passthru": { 00:05:45.234 "identify_ctrlr": false 00:05:45.234 } 00:05:45.234 } 00:05:45.234 }, 00:05:45.234 { 00:05:45.234 "method": "nvmf_set_max_subsystems", 00:05:45.234 "params": { 00:05:45.234 "max_subsystems": 1024 00:05:45.234 } 00:05:45.234 }, 00:05:45.234 { 00:05:45.234 "method": "nvmf_set_crdt", 00:05:45.234 "params": { 00:05:45.234 "crdt1": 0, 00:05:45.234 "crdt2": 0, 00:05:45.234 "crdt3": 0 00:05:45.234 } 00:05:45.234 }, 00:05:45.234 { 00:05:45.234 "method": "nvmf_create_transport", 00:05:45.234 "params": { 00:05:45.234 "trtype": "TCP", 00:05:45.234 "max_queue_depth": 128, 00:05:45.234 "max_io_qpairs_per_ctrlr": 127, 00:05:45.234 "in_capsule_data_size": 4096, 00:05:45.234 "max_io_size": 131072, 00:05:45.234 "io_unit_size": 131072, 00:05:45.234 "max_aq_depth": 128, 00:05:45.234 "num_shared_buffers": 511, 00:05:45.234 "buf_cache_size": 4294967295, 00:05:45.234 "dif_insert_or_strip": false, 00:05:45.234 "zcopy": false, 00:05:45.234 "c2h_success": true, 00:05:45.234 "sock_priority": 0, 00:05:45.234 "abort_timeout_sec": 1, 00:05:45.234 "ack_timeout": 0, 00:05:45.234 "data_wr_pool_size": 0 00:05:45.234 } 00:05:45.234 } 00:05:45.234 ] 00:05:45.234 }, 00:05:45.234 { 00:05:45.234 "subsystem": "iscsi", 00:05:45.234 "config": [ 00:05:45.234 { 00:05:45.234 "method": "iscsi_set_options", 00:05:45.234 "params": { 00:05:45.234 "node_base": "iqn.2016-06.io.spdk", 00:05:45.234 "max_sessions": 128, 00:05:45.234 "max_connections_per_session": 2, 00:05:45.234 "max_queue_depth": 64, 00:05:45.234 "default_time2wait": 2, 00:05:45.234 "default_time2retain": 20, 00:05:45.234 "first_burst_length": 8192, 00:05:45.234 "immediate_data": true, 00:05:45.234 "allow_duplicated_isid": false, 00:05:45.234 "error_recovery_level": 0, 00:05:45.234 "nop_timeout": 60, 00:05:45.234 "nop_in_interval": 30, 00:05:45.234 "disable_chap": false, 00:05:45.234 "require_chap": false, 00:05:45.234 "mutual_chap": false, 00:05:45.234 "chap_group": 0, 00:05:45.234 "max_large_datain_per_connection": 64, 00:05:45.234 "max_r2t_per_connection": 4, 00:05:45.234 
"pdu_pool_size": 36864, 00:05:45.234 "immediate_data_pool_size": 16384, 00:05:45.234 "data_out_pool_size": 2048 00:05:45.234 } 00:05:45.234 } 00:05:45.234 ] 00:05:45.234 } 00:05:45.234 ] 00:05:45.234 } 00:05:45.234 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:45.234 19:53:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3062770 00:05:45.234 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3062770 ']' 00:05:45.234 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3062770 00:05:45.234 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:45.234 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:45.234 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3062770 00:05:45.234 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:45.234 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:45.234 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3062770' 00:05:45.234 killing process with pid 3062770 00:05:45.234 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3062770 00:05:45.234 19:53:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3062770 00:05:45.491 19:53:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3062910 00:05:45.491 19:53:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:45.491 19:53:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:50.750 19:53:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3062910 00:05:50.750 19:53:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3062910 ']' 00:05:50.750 19:53:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3062910 00:05:50.750 19:53:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:05:50.750 19:53:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:50.750 19:53:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3062910 00:05:50.750 19:53:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:50.750 19:53:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:50.750 19:53:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3062910' 00:05:50.750 killing process with pid 3062910 00:05:50.750 19:53:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3062910 00:05:50.750 19:53:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3062910 00:05:51.008 19:53:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:51.008 19:53:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:51.008 00:05:51.008 real 
0m6.458s 00:05:51.008 user 0m6.045s 00:05:51.008 sys 0m0.698s 00:05:51.008 19:53:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.008 19:53:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.008 ************************************ 00:05:51.008 END TEST skip_rpc_with_json 00:05:51.008 ************************************ 00:05:51.008 19:53:38 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:51.009 19:53:38 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.009 19:53:38 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.009 19:53:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.009 ************************************ 00:05:51.009 START TEST skip_rpc_with_delay 00:05:51.009 ************************************ 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.009 [2024-07-13 19:53:38.633000] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
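The *ERROR* line above is the expected outcome: test_skip_rpc_with_delay deliberately combines two incompatible flags and only passes because the target refuses to start. Reproduced as a sketch (binary path abbreviated):

  spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # app.c: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
  echo $?   # non-zero; the harness's NOT wrapper converts this failure into a pass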
00:05:51.009 [2024-07-13 19:53:38.633094] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.009 00:05:51.009 real 0m0.066s 00:05:51.009 user 0m0.042s 00:05:51.009 sys 0m0.023s 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.009 19:53:38 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:51.009 ************************************ 00:05:51.009 END TEST skip_rpc_with_delay 00:05:51.009 ************************************ 00:05:51.267 19:53:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:51.267 19:53:38 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:51.267 19:53:38 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:51.267 19:53:38 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.267 19:53:38 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.267 19:53:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.267 ************************************ 00:05:51.267 START TEST exit_on_failed_rpc_init 00:05:51.267 ************************************ 00:05:51.267 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:05:51.267 19:53:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3063628 00:05:51.267 19:53:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.268 19:53:38 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3063628 00:05:51.268 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3063628 ']' 00:05:51.268 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.268 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:51.268 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.268 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:51.268 19:53:38 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:51.268 [2024-07-13 19:53:38.749156] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
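test_exit_on_failed_rpc_init, starting here, boots one target and then a second one that must exit because the default RPC socket is already claimed. A sketch of the collision, with paths shortened:

  spdk_tgt -m 0x1 &                 # first target owns /var/tmp/spdk.sock
  first_pid=$!
  sleep 1
  spdk_tgt -m 0x2                   # second target: RPC socket in use, init fails
  echo "second target exited with $?"   # non-zero, as the harness asserts below
  kill "$first_pid" && wait "$first_pid"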
00:05:51.268 [2024-07-13 19:53:38.749267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3063628 ] 00:05:51.268 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.268 [2024-07-13 19:53:38.812231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.268 [2024-07-13 19:53:38.901960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.526 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:51.526 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:05:51.526 19:53:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.526 19:53:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:51.526 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:51.526 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:51.526 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.526 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.526 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.526 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.526 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.526 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.526 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.526 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:51.526 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:51.784 [2024-07-13 19:53:39.218648] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:51.784 [2024-07-13 19:53:39.218735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3063637 ] 00:05:51.784 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.784 [2024-07-13 19:53:39.278652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.784 [2024-07-13 19:53:39.373069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.784 [2024-07-13 19:53:39.373200] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
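The es= bookkeeping that follows comes from the harness's NOT/valid_exec_arg wrapper, which inverts an expected failure. A simplified, hedged reconstruction of that pattern (the real helper in autotest_common.sh additionally validates the argument type and maps large exit codes):

  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))       # succeed only when the wrapped command failed
  }
  NOT spdk_tgt -m 0x2     # passes, because the second target is expected to die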
00:05:51.784 [2024-07-13 19:53:39.373223] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:51.784 [2024-07-13 19:53:39.373237] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3063628 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3063628 ']' 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3063628 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3063628 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3063628' 00:05:52.043 killing process with pid 3063628 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3063628 00:05:52.043 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3063628 00:05:52.302 00:05:52.302 real 0m1.184s 00:05:52.302 user 0m1.261s 00:05:52.302 sys 0m0.464s 00:05:52.302 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.302 19:53:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:52.302 ************************************ 00:05:52.302 END TEST exit_on_failed_rpc_init 00:05:52.302 ************************************ 00:05:52.302 19:53:39 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:52.302 00:05:52.302 real 0m13.378s 00:05:52.302 user 0m12.571s 00:05:52.303 sys 0m1.652s 00:05:52.303 19:53:39 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.303 19:53:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.303 ************************************ 00:05:52.303 END TEST skip_rpc 00:05:52.303 ************************************ 00:05:52.303 19:53:39 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:52.303 19:53:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.303 19:53:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.303 19:53:39 -- 
common/autotest_common.sh@10 -- # set +x 00:05:52.303 ************************************ 00:05:52.303 START TEST rpc_client 00:05:52.303 ************************************ 00:05:52.303 19:53:39 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:52.562 * Looking for test storage... 00:05:52.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:52.562 19:53:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:52.562 OK 00:05:52.562 19:53:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:52.562 00:05:52.562 real 0m0.064s 00:05:52.562 user 0m0.027s 00:05:52.562 sys 0m0.042s 00:05:52.562 19:53:40 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.562 19:53:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:52.562 ************************************ 00:05:52.562 END TEST rpc_client 00:05:52.562 ************************************ 00:05:52.562 19:53:40 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:52.562 19:53:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:52.562 19:53:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.562 19:53:40 -- common/autotest_common.sh@10 -- # set +x 00:05:52.562 ************************************ 00:05:52.562 START TEST json_config 00:05:52.562 ************************************ 00:05:52.562 19:53:40 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:52.562 19:53:40 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:52.562 19:53:40 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:52.562 19:53:40 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:52.562 19:53:40 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.562 19:53:40 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.562 19:53:40 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.562 19:53:40 json_config -- paths/export.sh@5 -- # export PATH 00:05:52.562 19:53:40 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@47 -- # : 0 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:52.562 19:53:40 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:52.562 INFO: JSON configuration test init 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:52.562 19:53:40 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:52.562 19:53:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.562 19:53:40 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:52.562 19:53:40 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:52.562 19:53:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.563 19:53:40 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:52.563 19:53:40 json_config -- json_config/common.sh@9 -- # local app=target 00:05:52.563 19:53:40 json_config -- json_config/common.sh@10 -- # shift 00:05:52.563 19:53:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:52.563 19:53:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:52.563 19:53:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:52.563 19:53:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:52.563 19:53:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:52.563 19:53:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3063878 00:05:52.563 19:53:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:52.563 19:53:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:52.563 Waiting for target to run... 
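The waitforlisten call traced next is the harness's readiness gate: it blocks until spdk_tgt answers on the UNIX-domain RPC socket it was told to bind. A minimal sketch of that loop, assuming an rpc_get_methods probe and a 100-try budget (both illustrative; the real helper lives in autotest_common.sh):

    waitfor_rpc_sock() {
      local sock=$1 i
      for ((i = 0; i < 100; i++)); do
        # any successful RPC round-trip proves the listener is up
        scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
      done
      return 1   # target never came up; the ERR trap fails the test
    }
    waitfor_rpc_sock /var/tmp/spdk_tgt.sock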
00:05:52.563 19:53:40 json_config -- json_config/common.sh@25 -- # waitforlisten 3063878 /var/tmp/spdk_tgt.sock 00:05:52.563 19:53:40 json_config -- common/autotest_common.sh@827 -- # '[' -z 3063878 ']' 00:05:52.563 19:53:40 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:52.563 19:53:40 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:52.563 19:53:40 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:52.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:52.563 19:53:40 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:52.563 19:53:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.563 [2024-07-13 19:53:40.176749] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:05:52.563 [2024-07-13 19:53:40.176847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3063878 ] 00:05:52.563 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.130 [2024-07-13 19:53:40.520826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.130 [2024-07-13 19:53:40.589237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.725 19:53:41 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:53.725 19:53:41 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:53.725 19:53:41 json_config -- json_config/common.sh@26 -- # echo '' 00:05:53.725 00:05:53.725 19:53:41 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:53.725 19:53:41 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:53.725 19:53:41 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:53.725 19:53:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.725 19:53:41 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:53.725 19:53:41 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:53.725 19:53:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.725 19:53:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.725 19:53:41 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:53.725 19:53:41 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:53.725 19:53:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:57.006 19:53:44 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:57.007 19:53:44 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:57.007 19:53:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:57.007 19:53:44 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:57.007 19:53:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:57.007 19:53:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.007 19:53:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:57.007 19:53:44 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:57.007 19:53:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:57.007 19:53:44 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:57.007 19:53:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:57.265 MallocForNvmf0 00:05:57.265 19:53:44 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:57.265 19:53:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:57.529 MallocForNvmf1 00:05:57.529 19:53:45 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:57.529 19:53:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:57.796 [2024-07-13 19:53:45.277391] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:57.796 19:53:45 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:57.796 19:53:45 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:58.054 19:53:45 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:58.054 19:53:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:58.311 19:53:45 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:58.311 19:53:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:58.568 19:53:46 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:58.568 19:53:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:58.827 [2024-07-13 19:53:46.260553] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:58.827 19:53:46 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:58.827 19:53:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:58.827 19:53:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.827 19:53:46 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:58.827 19:53:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:58.827 19:53:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.827 19:53:46 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:58.827 19:53:46 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:58.827 19:53:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:59.085 MallocBdevForConfigChangeCheck 00:05:59.085 19:53:46 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:59.085 19:53:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.085 19:53:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.085 19:53:46 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:59.085 19:53:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:59.342 19:53:46 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:59.343 INFO: shutting down applications... 
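Before the teardown begins, it is worth condensing the trace above: every piece of the NVMe-oF target was assembled over the same socket with scripts/rpc.py. The commands are verbatim from the log; the rpc() wrapper here is just shorthand for the tgt_rpc helper seen in the trace:

    rpc() { scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MiB bdev, 512 B blocks
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MiB bdev, 1 KiB blocks
    rpc nvmf_create_transport -t tcp -u 8192 -c 0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420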
00:05:59.343 19:53:46 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:59.343 19:53:46 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:59.343 19:53:46 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:59.343 19:53:46 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:01.240 Calling clear_iscsi_subsystem 00:06:01.240 Calling clear_nvmf_subsystem 00:06:01.240 Calling clear_nbd_subsystem 00:06:01.240 Calling clear_ublk_subsystem 00:06:01.240 Calling clear_vhost_blk_subsystem 00:06:01.240 Calling clear_vhost_scsi_subsystem 00:06:01.240 Calling clear_bdev_subsystem 00:06:01.240 19:53:48 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:01.240 19:53:48 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:01.240 19:53:48 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:01.240 19:53:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:01.240 19:53:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:01.240 19:53:48 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:01.499 19:53:48 json_config -- json_config/json_config.sh@345 -- # break 00:06:01.499 19:53:48 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:01.499 19:53:48 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:01.499 19:53:48 json_config -- json_config/common.sh@31 -- # local app=target 00:06:01.499 19:53:48 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:01.499 19:53:48 json_config -- json_config/common.sh@35 -- # [[ -n 3063878 ]] 00:06:01.499 19:53:48 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3063878 00:06:01.499 19:53:48 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:01.499 19:53:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.499 19:53:48 json_config -- json_config/common.sh@41 -- # kill -0 3063878 00:06:01.499 19:53:48 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:02.066 19:53:49 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:02.066 19:53:49 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.066 19:53:49 json_config -- json_config/common.sh@41 -- # kill -0 3063878 00:06:02.066 19:53:49 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:02.066 19:53:49 json_config -- json_config/common.sh@43 -- # break 00:06:02.066 19:53:49 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:02.066 19:53:49 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:02.066 SPDK target shutdown done 00:06:02.066 19:53:49 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:02.066 INFO: relaunching applications... 
00:06:02.066 19:53:49 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:02.066 19:53:49 json_config -- json_config/common.sh@9 -- # local app=target 00:06:02.066 19:53:49 json_config -- json_config/common.sh@10 -- # shift 00:06:02.066 19:53:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:02.066 19:53:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:02.066 19:53:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:02.066 19:53:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.066 19:53:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.066 19:53:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3065077 00:06:02.066 19:53:49 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:02.066 19:53:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:02.066 Waiting for target to run... 00:06:02.066 19:53:49 json_config -- json_config/common.sh@25 -- # waitforlisten 3065077 /var/tmp/spdk_tgt.sock 00:06:02.066 19:53:49 json_config -- common/autotest_common.sh@827 -- # '[' -z 3065077 ']' 00:06:02.066 19:53:49 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:02.066 19:53:49 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:02.066 19:53:49 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:02.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:02.066 19:53:49 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:02.066 19:53:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.066 [2024-07-13 19:53:49.524777] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:02.066 [2024-07-13 19:53:49.524879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3065077 ] 00:06:02.066 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.634 [2024-07-13 19:53:50.034764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.634 [2024-07-13 19:53:50.116978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.917 [2024-07-13 19:53:53.154588] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.917 [2024-07-13 19:53:53.187027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:05.917 19:53:53 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.917 19:53:53 json_config -- common/autotest_common.sh@860 -- # return 0 00:06:05.917 19:53:53 json_config -- json_config/common.sh@26 -- # echo '' 00:06:05.917 00:06:05.917 19:53:53 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:05.917 19:53:53 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:05.917 INFO: Checking if target configuration is the same... 
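The check announced here works by normalization: json_diff.sh pulls a fresh save_config from the live target, sorts both it and the on-disk JSON with config_filter.py, and diffs the results, so key order and other cosmetic differences cannot produce a false mismatch. In outline (the temp-file names are illustrative; the script mktemps its own, as the trace below shows):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
    test/json_config/config_filter.py -method sort < /tmp/live.json       > /tmp/a
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/b
    diff -u /tmp/a /tmp/b && echo 'INFO: JSON config files are the same'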
00:06:05.917 19:53:53 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.917 19:53:53 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:05.917 19:53:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.917 + '[' 2 -ne 2 ']' 00:06:05.917 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:05.917 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:05.917 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:05.917 +++ basename /dev/fd/62 00:06:05.917 ++ mktemp /tmp/62.XXX 00:06:05.917 + tmp_file_1=/tmp/62.AOf 00:06:05.917 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.917 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:05.917 + tmp_file_2=/tmp/spdk_tgt_config.json.B9u 00:06:05.917 + ret=0 00:06:05.917 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:06.175 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:06.176 + diff -u /tmp/62.AOf /tmp/spdk_tgt_config.json.B9u 00:06:06.176 + echo 'INFO: JSON config files are the same' 00:06:06.176 INFO: JSON config files are the same 00:06:06.176 + rm /tmp/62.AOf /tmp/spdk_tgt_config.json.B9u 00:06:06.176 + exit 0 00:06:06.176 19:53:53 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:06.176 19:53:53 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:06.176 INFO: changing configuration and checking if this can be detected... 00:06:06.176 19:53:53 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:06.176 19:53:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:06.433 19:53:53 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.433 19:53:53 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:06.433 19:53:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:06.433 + '[' 2 -ne 2 ']' 00:06:06.433 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:06.433 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:06.433 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:06.433 +++ basename /dev/fd/62 00:06:06.433 ++ mktemp /tmp/62.XXX 00:06:06.433 + tmp_file_1=/tmp/62.BzP 00:06:06.433 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.433 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:06.433 + tmp_file_2=/tmp/spdk_tgt_config.json.n2X 00:06:06.433 + ret=0 00:06:06.433 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:06.691 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:06.691 + diff -u /tmp/62.BzP /tmp/spdk_tgt_config.json.n2X 00:06:06.691 + ret=1 00:06:06.691 + echo '=== Start of file: /tmp/62.BzP ===' 00:06:06.691 + cat /tmp/62.BzP 00:06:06.691 + echo '=== End of file: /tmp/62.BzP ===' 00:06:06.691 + echo '' 00:06:06.691 + echo '=== Start of file: /tmp/spdk_tgt_config.json.n2X ===' 00:06:06.691 + cat /tmp/spdk_tgt_config.json.n2X 00:06:06.691 + echo '=== End of file: /tmp/spdk_tgt_config.json.n2X ===' 00:06:06.691 + echo '' 00:06:06.691 + rm /tmp/62.BzP /tmp/spdk_tgt_config.json.n2X 00:06:06.691 + exit 1 00:06:06.691 19:53:54 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:06.691 INFO: configuration change detected. 00:06:06.949 19:53:54 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:06.949 19:53:54 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:06.949 19:53:54 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:06.949 19:53:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.949 19:53:54 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:06.949 19:53:54 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:06.949 19:53:54 json_config -- json_config/json_config.sh@317 -- # [[ -n 3065077 ]] 00:06:06.949 19:53:54 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:06.949 19:53:54 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:06.949 19:53:54 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:06.950 19:53:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.950 19:53:54 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:06.950 19:53:54 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:06.950 19:53:54 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:06.950 19:53:54 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:06.950 19:53:54 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:06.950 19:53:54 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:06.950 19:53:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.950 19:53:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.950 19:53:54 json_config -- json_config/json_config.sh@323 -- # killprocess 3065077 00:06:06.950 19:53:54 json_config -- common/autotest_common.sh@946 -- # '[' -z 3065077 ']' 00:06:06.950 19:53:54 json_config -- common/autotest_common.sh@950 -- # kill -0 3065077 00:06:06.950 19:53:54 json_config -- common/autotest_common.sh@951 -- # uname 00:06:06.950 19:53:54 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:06.950 19:53:54 
json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3065077 00:06:06.950 19:53:54 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:06.950 19:53:54 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:06.950 19:53:54 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3065077' 00:06:06.950 killing process with pid 3065077 00:06:06.950 19:53:54 json_config -- common/autotest_common.sh@965 -- # kill 3065077 00:06:06.950 19:53:54 json_config -- common/autotest_common.sh@970 -- # wait 3065077 00:06:08.848 19:53:56 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.848 19:53:56 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:08.848 19:53:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.848 19:53:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.848 19:53:56 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:08.848 19:53:56 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:08.848 INFO: Success 00:06:08.848 00:06:08.848 real 0m16.013s 00:06:08.848 user 0m17.805s 00:06:08.848 sys 0m2.001s 00:06:08.848 19:53:56 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.848 19:53:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.848 ************************************ 00:06:08.848 END TEST json_config 00:06:08.848 ************************************ 00:06:08.848 19:53:56 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:08.848 19:53:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:08.848 19:53:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.848 19:53:56 -- common/autotest_common.sh@10 -- # set +x 00:06:08.848 ************************************ 00:06:08.848 START TEST json_config_extra_key 00:06:08.848 ************************************ 00:06:08.848 19:53:56 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:08.848 19:53:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.848 19:53:56 json_config_extra_key -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.848 19:53:56 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.848 19:53:56 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.848 19:53:56 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.848 19:53:56 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.848 19:53:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.848 19:53:56 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.848 19:53:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:08.848 19:53:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.848 19:53:56 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:08.848 19:53:56 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:08.848 19:53:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:08.848 19:53:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:08.848 19:53:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:08.848 19:53:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:08.848 19:53:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:08.848 19:53:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:08.848 19:53:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:08.848 19:53:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:08.848 19:53:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:08.848 19:53:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:08.848 19:53:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:08.848 INFO: launching applications... 00:06:08.848 19:53:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:08.848 19:53:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:08.848 19:53:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:08.848 19:53:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.848 19:53:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.848 19:53:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.848 19:53:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.848 19:53:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.848 19:53:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3065985 00:06:08.848 19:53:56 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:08.848 19:53:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.848 Waiting for target to run... 
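As the declarations traced above show, the harness keys everything it knows about a managed app through bash associative arrays, so the same start/wait/kill helpers can serve any app by name. The shape, reproduced from the trace:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='test/json_config/extra_key.json')
    # helpers then index by name, e.g.: kill -SIGINT "${app_pid[$app]}"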
00:06:08.848 19:53:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3065985 /var/tmp/spdk_tgt.sock 00:06:08.848 19:53:56 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3065985 ']' 00:06:08.848 19:53:56 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.848 19:53:56 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:08.848 19:53:56 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.848 19:53:56 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:08.848 19:53:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:08.848 [2024-07-13 19:53:56.226233] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:08.848 [2024-07-13 19:53:56.226313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3065985 ] 00:06:08.848 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.106 [2024-07-13 19:53:56.577053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.106 [2024-07-13 19:53:56.640801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.672 19:53:57 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:09.672 19:53:57 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:09.672 19:53:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:09.672 00:06:09.672 19:53:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:09.672 INFO: shutting down applications... 
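The shutdown that follows is the harness's standard pattern: SIGINT the pid recorded in app_pid, then probe it with kill -0 every half second for up to 30 tries before declaring the target gone. A sketch of its shape, not the verbatim helper (the kill -9 fallback is an assumption; this slice of the log never needs it):

    json_config_test_shutdown_app() {
      local app=$1 pid=${app_pid[$app]} i
      kill -SIGINT "$pid"
      for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 probes, delivers no signal
        sleep 0.5
      done
      if kill -0 "$pid" 2>/dev/null; then
        kill -9 "$pid"                        # assumed fallback, not traced here
      else
        app_pid[$app]=
        echo 'SPDK target shutdown done'
      fi
    }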
00:06:09.672 19:53:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:09.672 19:53:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:09.672 19:53:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:09.672 19:53:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3065985 ]] 00:06:09.672 19:53:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3065985 00:06:09.672 19:53:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:09.672 19:53:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.672 19:53:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3065985 00:06:09.672 19:53:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.238 19:53:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.238 19:53:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.238 19:53:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3065985 00:06:10.238 19:53:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:10.238 19:53:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:10.238 19:53:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:10.238 19:53:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:10.238 SPDK target shutdown done 00:06:10.238 19:53:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:10.238 Success 00:06:10.238 00:06:10.238 real 0m1.558s 00:06:10.238 user 0m1.495s 00:06:10.238 sys 0m0.442s 00:06:10.238 19:53:57 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.238 19:53:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.238 ************************************ 00:06:10.238 END TEST json_config_extra_key 00:06:10.238 ************************************ 00:06:10.238 19:53:57 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:10.238 19:53:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.238 19:53:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.238 19:53:57 -- common/autotest_common.sh@10 -- # set +x 00:06:10.238 ************************************ 00:06:10.238 START TEST alias_rpc 00:06:10.238 ************************************ 00:06:10.238 19:53:57 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:10.238 * Looking for test storage... 
00:06:10.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:10.238 19:53:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:10.238 19:53:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3066290 00:06:10.238 19:53:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.238 19:53:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3066290 00:06:10.238 19:53:57 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3066290 ']' 00:06:10.238 19:53:57 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.238 19:53:57 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:10.238 19:53:57 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.238 19:53:57 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:10.238 19:53:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.238 [2024-07-13 19:53:57.836459] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:10.238 [2024-07-13 19:53:57.836561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066290 ] 00:06:10.238 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.238 [2024-07-13 19:53:57.893462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.496 [2024-07-13 19:53:57.978677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.754 19:53:58 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.754 19:53:58 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:10.754 19:53:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:11.013 19:53:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3066290 00:06:11.013 19:53:58 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3066290 ']' 00:06:11.013 19:53:58 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3066290 00:06:11.013 19:53:58 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:11.013 19:53:58 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:11.013 19:53:58 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3066290 00:06:11.013 19:53:58 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:11.013 19:53:58 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:11.013 19:53:58 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3066290' 00:06:11.013 killing process with pid 3066290 00:06:11.013 19:53:58 alias_rpc -- common/autotest_common.sh@965 -- # kill 3066290 00:06:11.013 19:53:58 alias_rpc -- common/autotest_common.sh@970 -- # wait 3066290 00:06:11.579 00:06:11.579 real 0m1.204s 00:06:11.579 user 0m1.266s 00:06:11.579 sys 0m0.433s 00:06:11.579 19:53:58 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.579 19:53:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.579 
************************************ 00:06:11.579 END TEST alias_rpc 00:06:11.579 ************************************ 00:06:11.579 19:53:58 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:11.579 19:53:58 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:11.579 19:53:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.579 19:53:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.579 19:53:58 -- common/autotest_common.sh@10 -- # set +x 00:06:11.579 ************************************ 00:06:11.579 START TEST spdkcli_tcp 00:06:11.579 ************************************ 00:06:11.579 19:53:58 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:11.579 * Looking for test storage... 00:06:11.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:11.579 19:53:59 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:11.579 19:53:59 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:11.579 19:53:59 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:11.579 19:53:59 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:11.579 19:53:59 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:11.579 19:53:59 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:11.579 19:53:59 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:11.579 19:53:59 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:11.579 19:53:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.579 19:53:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3066475 00:06:11.579 19:53:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:11.579 19:53:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3066475 00:06:11.579 19:53:59 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3066475 ']' 00:06:11.579 19:53:59 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.579 19:53:59 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:11.579 19:53:59 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.579 19:53:59 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:11.579 19:53:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.579 [2024-07-13 19:53:59.098284] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:06:11.579 [2024-07-13 19:53:59.098367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066475 ] 00:06:11.579 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.579 [2024-07-13 19:53:59.156184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.886 [2024-07-13 19:53:59.242366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.886 [2024-07-13 19:53:59.242370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.886 19:53:59 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:11.886 19:53:59 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:11.886 19:53:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3066495 00:06:11.886 19:53:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:11.886 19:53:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:12.157 [ 00:06:12.157 "bdev_malloc_delete", 00:06:12.157 "bdev_malloc_create", 00:06:12.157 "bdev_null_resize", 00:06:12.157 "bdev_null_delete", 00:06:12.157 "bdev_null_create", 00:06:12.157 "bdev_nvme_cuse_unregister", 00:06:12.157 "bdev_nvme_cuse_register", 00:06:12.157 "bdev_opal_new_user", 00:06:12.157 "bdev_opal_set_lock_state", 00:06:12.157 "bdev_opal_delete", 00:06:12.157 "bdev_opal_get_info", 00:06:12.157 "bdev_opal_create", 00:06:12.157 "bdev_nvme_opal_revert", 00:06:12.157 "bdev_nvme_opal_init", 00:06:12.157 "bdev_nvme_send_cmd", 00:06:12.157 "bdev_nvme_get_path_iostat", 00:06:12.157 "bdev_nvme_get_mdns_discovery_info", 00:06:12.157 "bdev_nvme_stop_mdns_discovery", 00:06:12.157 "bdev_nvme_start_mdns_discovery", 00:06:12.157 "bdev_nvme_set_multipath_policy", 00:06:12.157 "bdev_nvme_set_preferred_path", 00:06:12.157 "bdev_nvme_get_io_paths", 00:06:12.157 "bdev_nvme_remove_error_injection", 00:06:12.157 "bdev_nvme_add_error_injection", 00:06:12.157 "bdev_nvme_get_discovery_info", 00:06:12.157 "bdev_nvme_stop_discovery", 00:06:12.157 "bdev_nvme_start_discovery", 00:06:12.157 "bdev_nvme_get_controller_health_info", 00:06:12.157 "bdev_nvme_disable_controller", 00:06:12.157 "bdev_nvme_enable_controller", 00:06:12.157 "bdev_nvme_reset_controller", 00:06:12.157 "bdev_nvme_get_transport_statistics", 00:06:12.157 "bdev_nvme_apply_firmware", 00:06:12.157 "bdev_nvme_detach_controller", 00:06:12.157 "bdev_nvme_get_controllers", 00:06:12.157 "bdev_nvme_attach_controller", 00:06:12.157 "bdev_nvme_set_hotplug", 00:06:12.157 "bdev_nvme_set_options", 00:06:12.157 "bdev_passthru_delete", 00:06:12.157 "bdev_passthru_create", 00:06:12.157 "bdev_lvol_set_parent_bdev", 00:06:12.157 "bdev_lvol_set_parent", 00:06:12.157 "bdev_lvol_check_shallow_copy", 00:06:12.157 "bdev_lvol_start_shallow_copy", 00:06:12.157 "bdev_lvol_grow_lvstore", 00:06:12.157 "bdev_lvol_get_lvols", 00:06:12.157 "bdev_lvol_get_lvstores", 00:06:12.157 "bdev_lvol_delete", 00:06:12.157 "bdev_lvol_set_read_only", 00:06:12.157 "bdev_lvol_resize", 00:06:12.157 "bdev_lvol_decouple_parent", 00:06:12.157 "bdev_lvol_inflate", 00:06:12.157 "bdev_lvol_rename", 00:06:12.157 "bdev_lvol_clone_bdev", 00:06:12.157 "bdev_lvol_clone", 00:06:12.157 "bdev_lvol_snapshot", 00:06:12.158 "bdev_lvol_create", 00:06:12.158 "bdev_lvol_delete_lvstore", 00:06:12.158 "bdev_lvol_rename_lvstore", 
00:06:12.158 "bdev_lvol_create_lvstore", 00:06:12.158 "bdev_raid_set_options", 00:06:12.158 "bdev_raid_remove_base_bdev", 00:06:12.158 "bdev_raid_add_base_bdev", 00:06:12.158 "bdev_raid_delete", 00:06:12.158 "bdev_raid_create", 00:06:12.158 "bdev_raid_get_bdevs", 00:06:12.158 "bdev_error_inject_error", 00:06:12.158 "bdev_error_delete", 00:06:12.158 "bdev_error_create", 00:06:12.158 "bdev_split_delete", 00:06:12.158 "bdev_split_create", 00:06:12.158 "bdev_delay_delete", 00:06:12.158 "bdev_delay_create", 00:06:12.158 "bdev_delay_update_latency", 00:06:12.158 "bdev_zone_block_delete", 00:06:12.158 "bdev_zone_block_create", 00:06:12.158 "blobfs_create", 00:06:12.158 "blobfs_detect", 00:06:12.158 "blobfs_set_cache_size", 00:06:12.158 "bdev_aio_delete", 00:06:12.158 "bdev_aio_rescan", 00:06:12.158 "bdev_aio_create", 00:06:12.158 "bdev_ftl_set_property", 00:06:12.158 "bdev_ftl_get_properties", 00:06:12.158 "bdev_ftl_get_stats", 00:06:12.158 "bdev_ftl_unmap", 00:06:12.158 "bdev_ftl_unload", 00:06:12.158 "bdev_ftl_delete", 00:06:12.158 "bdev_ftl_load", 00:06:12.158 "bdev_ftl_create", 00:06:12.158 "bdev_virtio_attach_controller", 00:06:12.158 "bdev_virtio_scsi_get_devices", 00:06:12.158 "bdev_virtio_detach_controller", 00:06:12.158 "bdev_virtio_blk_set_hotplug", 00:06:12.158 "bdev_iscsi_delete", 00:06:12.158 "bdev_iscsi_create", 00:06:12.158 "bdev_iscsi_set_options", 00:06:12.158 "accel_error_inject_error", 00:06:12.158 "ioat_scan_accel_module", 00:06:12.158 "dsa_scan_accel_module", 00:06:12.158 "iaa_scan_accel_module", 00:06:12.158 "vfu_virtio_create_scsi_endpoint", 00:06:12.158 "vfu_virtio_scsi_remove_target", 00:06:12.158 "vfu_virtio_scsi_add_target", 00:06:12.158 "vfu_virtio_create_blk_endpoint", 00:06:12.158 "vfu_virtio_delete_endpoint", 00:06:12.158 "keyring_file_remove_key", 00:06:12.158 "keyring_file_add_key", 00:06:12.158 "keyring_linux_set_options", 00:06:12.158 "iscsi_get_histogram", 00:06:12.158 "iscsi_enable_histogram", 00:06:12.158 "iscsi_set_options", 00:06:12.158 "iscsi_get_auth_groups", 00:06:12.158 "iscsi_auth_group_remove_secret", 00:06:12.158 "iscsi_auth_group_add_secret", 00:06:12.158 "iscsi_delete_auth_group", 00:06:12.158 "iscsi_create_auth_group", 00:06:12.158 "iscsi_set_discovery_auth", 00:06:12.158 "iscsi_get_options", 00:06:12.158 "iscsi_target_node_request_logout", 00:06:12.158 "iscsi_target_node_set_redirect", 00:06:12.158 "iscsi_target_node_set_auth", 00:06:12.158 "iscsi_target_node_add_lun", 00:06:12.158 "iscsi_get_stats", 00:06:12.158 "iscsi_get_connections", 00:06:12.158 "iscsi_portal_group_set_auth", 00:06:12.158 "iscsi_start_portal_group", 00:06:12.158 "iscsi_delete_portal_group", 00:06:12.158 "iscsi_create_portal_group", 00:06:12.158 "iscsi_get_portal_groups", 00:06:12.158 "iscsi_delete_target_node", 00:06:12.158 "iscsi_target_node_remove_pg_ig_maps", 00:06:12.158 "iscsi_target_node_add_pg_ig_maps", 00:06:12.158 "iscsi_create_target_node", 00:06:12.158 "iscsi_get_target_nodes", 00:06:12.158 "iscsi_delete_initiator_group", 00:06:12.158 "iscsi_initiator_group_remove_initiators", 00:06:12.158 "iscsi_initiator_group_add_initiators", 00:06:12.158 "iscsi_create_initiator_group", 00:06:12.158 "iscsi_get_initiator_groups", 00:06:12.158 "nvmf_set_crdt", 00:06:12.158 "nvmf_set_config", 00:06:12.158 "nvmf_set_max_subsystems", 00:06:12.158 "nvmf_stop_mdns_prr", 00:06:12.158 "nvmf_publish_mdns_prr", 00:06:12.158 "nvmf_subsystem_get_listeners", 00:06:12.158 "nvmf_subsystem_get_qpairs", 00:06:12.158 "nvmf_subsystem_get_controllers", 00:06:12.158 "nvmf_get_stats", 00:06:12.158 
"nvmf_get_transports", 00:06:12.158 "nvmf_create_transport", 00:06:12.158 "nvmf_get_targets", 00:06:12.158 "nvmf_delete_target", 00:06:12.158 "nvmf_create_target", 00:06:12.158 "nvmf_subsystem_allow_any_host", 00:06:12.158 "nvmf_subsystem_remove_host", 00:06:12.158 "nvmf_subsystem_add_host", 00:06:12.158 "nvmf_ns_remove_host", 00:06:12.158 "nvmf_ns_add_host", 00:06:12.158 "nvmf_subsystem_remove_ns", 00:06:12.158 "nvmf_subsystem_add_ns", 00:06:12.158 "nvmf_subsystem_listener_set_ana_state", 00:06:12.158 "nvmf_discovery_get_referrals", 00:06:12.158 "nvmf_discovery_remove_referral", 00:06:12.158 "nvmf_discovery_add_referral", 00:06:12.158 "nvmf_subsystem_remove_listener", 00:06:12.158 "nvmf_subsystem_add_listener", 00:06:12.158 "nvmf_delete_subsystem", 00:06:12.158 "nvmf_create_subsystem", 00:06:12.158 "nvmf_get_subsystems", 00:06:12.158 "env_dpdk_get_mem_stats", 00:06:12.158 "nbd_get_disks", 00:06:12.158 "nbd_stop_disk", 00:06:12.158 "nbd_start_disk", 00:06:12.158 "ublk_recover_disk", 00:06:12.158 "ublk_get_disks", 00:06:12.158 "ublk_stop_disk", 00:06:12.158 "ublk_start_disk", 00:06:12.158 "ublk_destroy_target", 00:06:12.158 "ublk_create_target", 00:06:12.158 "virtio_blk_create_transport", 00:06:12.158 "virtio_blk_get_transports", 00:06:12.158 "vhost_controller_set_coalescing", 00:06:12.158 "vhost_get_controllers", 00:06:12.158 "vhost_delete_controller", 00:06:12.158 "vhost_create_blk_controller", 00:06:12.158 "vhost_scsi_controller_remove_target", 00:06:12.158 "vhost_scsi_controller_add_target", 00:06:12.158 "vhost_start_scsi_controller", 00:06:12.158 "vhost_create_scsi_controller", 00:06:12.158 "thread_set_cpumask", 00:06:12.158 "framework_get_scheduler", 00:06:12.158 "framework_set_scheduler", 00:06:12.158 "framework_get_reactors", 00:06:12.158 "thread_get_io_channels", 00:06:12.158 "thread_get_pollers", 00:06:12.158 "thread_get_stats", 00:06:12.158 "framework_monitor_context_switch", 00:06:12.158 "spdk_kill_instance", 00:06:12.158 "log_enable_timestamps", 00:06:12.158 "log_get_flags", 00:06:12.158 "log_clear_flag", 00:06:12.158 "log_set_flag", 00:06:12.158 "log_get_level", 00:06:12.158 "log_set_level", 00:06:12.158 "log_get_print_level", 00:06:12.158 "log_set_print_level", 00:06:12.158 "framework_enable_cpumask_locks", 00:06:12.158 "framework_disable_cpumask_locks", 00:06:12.158 "framework_wait_init", 00:06:12.158 "framework_start_init", 00:06:12.158 "scsi_get_devices", 00:06:12.158 "bdev_get_histogram", 00:06:12.158 "bdev_enable_histogram", 00:06:12.158 "bdev_set_qos_limit", 00:06:12.158 "bdev_set_qd_sampling_period", 00:06:12.158 "bdev_get_bdevs", 00:06:12.158 "bdev_reset_iostat", 00:06:12.158 "bdev_get_iostat", 00:06:12.158 "bdev_examine", 00:06:12.158 "bdev_wait_for_examine", 00:06:12.158 "bdev_set_options", 00:06:12.158 "notify_get_notifications", 00:06:12.158 "notify_get_types", 00:06:12.158 "accel_get_stats", 00:06:12.158 "accel_set_options", 00:06:12.158 "accel_set_driver", 00:06:12.158 "accel_crypto_key_destroy", 00:06:12.158 "accel_crypto_keys_get", 00:06:12.158 "accel_crypto_key_create", 00:06:12.158 "accel_assign_opc", 00:06:12.158 "accel_get_module_info", 00:06:12.158 "accel_get_opc_assignments", 00:06:12.158 "vmd_rescan", 00:06:12.158 "vmd_remove_device", 00:06:12.158 "vmd_enable", 00:06:12.158 "sock_get_default_impl", 00:06:12.158 "sock_set_default_impl", 00:06:12.158 "sock_impl_set_options", 00:06:12.158 "sock_impl_get_options", 00:06:12.158 "iobuf_get_stats", 00:06:12.158 "iobuf_set_options", 00:06:12.158 "keyring_get_keys", 00:06:12.158 "framework_get_pci_devices", 
00:06:12.158 "framework_get_config", 00:06:12.158 "framework_get_subsystems", 00:06:12.158 "vfu_tgt_set_base_path", 00:06:12.158 "trace_get_info", 00:06:12.158 "trace_get_tpoint_group_mask", 00:06:12.158 "trace_disable_tpoint_group", 00:06:12.158 "trace_enable_tpoint_group", 00:06:12.158 "trace_clear_tpoint_mask", 00:06:12.158 "trace_set_tpoint_mask", 00:06:12.158 "spdk_get_version", 00:06:12.158 "rpc_get_methods" 00:06:12.158 ] 00:06:12.158 19:53:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:12.158 19:53:59 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.158 19:53:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.158 19:53:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:12.158 19:53:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3066475 00:06:12.158 19:53:59 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3066475 ']' 00:06:12.158 19:53:59 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3066475 00:06:12.158 19:53:59 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:12.158 19:53:59 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:12.158 19:53:59 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3066475 00:06:12.158 19:53:59 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:12.158 19:53:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:12.158 19:53:59 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3066475' 00:06:12.158 killing process with pid 3066475 00:06:12.158 19:53:59 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3066475 00:06:12.158 19:53:59 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3066475 00:06:12.724 00:06:12.724 real 0m1.219s 00:06:12.724 user 0m2.178s 00:06:12.724 sys 0m0.443s 00:06:12.724 19:54:00 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.724 19:54:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.724 ************************************ 00:06:12.724 END TEST spdkcli_tcp 00:06:12.724 ************************************ 00:06:12.724 19:54:00 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:12.724 19:54:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.724 19:54:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.724 19:54:00 -- common/autotest_common.sh@10 -- # set +x 00:06:12.724 ************************************ 00:06:12.724 START TEST dpdk_mem_utility 00:06:12.724 ************************************ 00:06:12.724 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:12.724 * Looking for test storage... 
00:06:12.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:12.724 19:54:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:12.724 19:54:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3066712 00:06:12.724 19:54:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.724 19:54:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3066712 00:06:12.724 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3066712 ']' 00:06:12.724 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.724 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:12.724 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.724 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:12.724 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:12.724 [2024-07-13 19:54:00.354392] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:12.724 [2024-07-13 19:54:00.354476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066712 ] 00:06:12.724 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.982 [2024-07-13 19:54:00.414439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.982 [2024-07-13 19:54:00.502753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.241 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:13.241 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:13.241 19:54:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:13.241 19:54:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:13.241 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.241 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:13.241 { 00:06:13.241 "filename": "/tmp/spdk_mem_dump.txt" 00:06:13.241 } 00:06:13.241 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.241 19:54:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:13.241 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:13.241 1 heaps totaling size 814.000000 MiB 00:06:13.241 size: 814.000000 MiB heap id: 0 00:06:13.241 end heaps---------- 00:06:13.241 8 mempools totaling size 598.116089 MiB 00:06:13.241 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:13.241 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:13.241 size: 84.521057 MiB name: bdev_io_3066712 00:06:13.241 size: 51.011292 MiB name: evtpool_3066712 00:06:13.241 size: 50.003479 MiB name: 
msgpool_3066712 00:06:13.241 size: 21.763794 MiB name: PDU_Pool 00:06:13.241 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:13.241 size: 0.026123 MiB name: Session_Pool 00:06:13.241 end mempools------- 00:06:13.241 6 memzones totaling size 4.142822 MiB 00:06:13.241 size: 1.000366 MiB name: RG_ring_0_3066712 00:06:13.241 size: 1.000366 MiB name: RG_ring_1_3066712 00:06:13.241 size: 1.000366 MiB name: RG_ring_4_3066712 00:06:13.241 size: 1.000366 MiB name: RG_ring_5_3066712 00:06:13.241 size: 0.125366 MiB name: RG_ring_2_3066712 00:06:13.241 size: 0.015991 MiB name: RG_ring_3_3066712 00:06:13.241 end memzones------- 00:06:13.241 19:54:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:13.241 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:13.241 list of free elements. size: 12.519348 MiB 00:06:13.241 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:13.241 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:13.241 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:13.241 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:13.241 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:13.241 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:13.241 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:13.241 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:13.241 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:13.241 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:13.241 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:13.241 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:13.241 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:13.241 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:13.241 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:13.241 list of standard malloc elements. 
size: 199.218079 MiB 00:06:13.241 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:13.241 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:13.241 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:13.241 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:13.241 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:13.241 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:13.241 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:13.241 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:13.241 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:13.241 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:13.241 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:13.241 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:13.241 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:13.241 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:13.241 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:13.241 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:13.241 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:13.241 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:13.241 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:13.241 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:13.241 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:13.241 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:13.241 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:13.241 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:13.241 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:13.241 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:13.241 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:13.241 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:13.241 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:13.241 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:13.241 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:13.241 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:13.241 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:13.241 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:13.241 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:13.241 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:13.241 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:13.241 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:13.241 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:13.241 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:13.241 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:13.241 list of memzone associated elements. 
size: 602.262573 MiB 00:06:13.241 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:13.241 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:13.241 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:13.241 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:13.241 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:13.241 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3066712_0 00:06:13.241 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:13.241 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3066712_0 00:06:13.241 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:13.241 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3066712_0 00:06:13.241 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:13.241 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:13.241 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:13.241 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:13.241 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:13.241 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3066712 00:06:13.241 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:13.241 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3066712 00:06:13.241 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:13.241 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3066712 00:06:13.241 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:13.241 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:13.241 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:13.241 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:13.241 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:13.241 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:13.241 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:13.241 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:13.241 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:13.241 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3066712 00:06:13.241 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:13.241 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3066712 00:06:13.241 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:13.241 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3066712 00:06:13.241 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:13.241 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3066712 00:06:13.241 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:13.241 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3066712 00:06:13.241 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:13.241 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:13.241 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:13.241 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:13.241 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:13.241 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:13.241 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:13.241 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3066712 00:06:13.241 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:13.241 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:13.241 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:13.241 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:13.241 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:13.241 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3066712 00:06:13.241 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:13.241 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:13.241 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:13.241 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3066712 00:06:13.241 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:13.241 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3066712 00:06:13.241 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:13.241 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:13.241 19:54:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:13.241 19:54:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3066712 00:06:13.241 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3066712 ']' 00:06:13.241 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3066712 00:06:13.241 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:06:13.241 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:13.241 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3066712 00:06:13.241 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:13.241 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:13.241 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3066712' 00:06:13.241 killing process with pid 3066712 00:06:13.241 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3066712 00:06:13.241 19:54:00 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3066712 00:06:13.807 00:06:13.807 real 0m1.037s 00:06:13.807 user 0m1.019s 00:06:13.807 sys 0m0.394s 00:06:13.807 19:54:01 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.807 19:54:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:13.807 ************************************ 00:06:13.807 END TEST dpdk_mem_utility 00:06:13.807 ************************************ 00:06:13.807 19:54:01 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:13.807 19:54:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:13.807 19:54:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.807 19:54:01 -- common/autotest_common.sh@10 -- # set +x 00:06:13.807 ************************************ 00:06:13.807 START TEST event 00:06:13.807 ************************************ 00:06:13.807 19:54:01 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:13.807 * Looking for test storage... 
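The dpdk_mem_utility pass above is a three-step flow that can be replayed against any running SPDK target, using only the calls visible in this log (env_dpdk_get_mem_stats returns the name of the dump file it writes, and dpdk_mem_info.py parses that file):

  # 1) dump the target's DPDK memory state
  ./scripts/rpc.py env_dpdk_get_mem_stats      # -> { "filename": "/tmp/spdk_mem_dump.txt" }

  # 2) summarize heaps, mempools and memzones from the dump
  ./scripts/dpdk_mem_info.py

  # 3) per-element breakdown of heap 0, as the test does above
  ./scripts/dpdk_mem_info.py -m 0

The summary and detailed views agree: one 814 MiB heap, eight mempools totalling ~598.1 MiB (the per-PID bdev_io/evtpool/msgpool pools among them), and six RG_ring_* memzones adding up to ~4.14 MiB.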
00:06:13.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:13.807 19:54:01 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:13.807 19:54:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:13.807 19:54:01 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:13.807 19:54:01 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:13.807 19:54:01 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.807 19:54:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.807 ************************************ 00:06:13.807 START TEST event_perf 00:06:13.807 ************************************ 00:06:13.807 19:54:01 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:13.807 Running I/O for 1 seconds...[2024-07-13 19:54:01.425030] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:13.807 [2024-07-13 19:54:01.425098] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3066957 ] 00:06:13.807 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.064 [2024-07-13 19:54:01.487087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.064 [2024-07-13 19:54:01.583728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.064 [2024-07-13 19:54:01.583805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.064 [2024-07-13 19:54:01.583905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.064 [2024-07-13 19:54:01.583908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.435 Running I/O for 1 seconds... 00:06:15.435 lcore 0: 237311 00:06:15.435 lcore 1: 237309 00:06:15.435 lcore 2: 237309 00:06:15.435 lcore 3: 237310 00:06:15.435 done. 00:06:15.435 00:06:15.435 real 0m1.252s 00:06:15.435 user 0m4.164s 00:06:15.435 sys 0m0.082s 00:06:15.435 19:54:02 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.435 19:54:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:15.435 ************************************ 00:06:15.435 END TEST event_perf 00:06:15.435 ************************************ 00:06:15.435 19:54:02 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:15.435 19:54:02 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:15.435 19:54:02 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.435 19:54:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.435 ************************************ 00:06:15.435 START TEST event_reactor 00:06:15.435 ************************************ 00:06:15.435 19:54:02 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:15.435 [2024-07-13 19:54:02.723822] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
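For scale, the event_perf run above used -m 0xF (core mask covering lcores 0-3, hence the four reactors) and -t 1 (a one-second measurement window); summing its per-lcore counters gives 237,311 + 237,309 + 237,309 + 237,310 = 949,239 events, roughly 949k events/s aggregate and near-perfectly balanced across the four cores. The equivalent standalone invocation from an in-tree build, as this job uses:

  ./test/event/event_perf/event_perf -m 0xF -t 1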
00:06:15.435 [2024-07-13 19:54:02.723898] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3067149 ] 00:06:15.435 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.435 [2024-07-13 19:54:02.785263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.435 [2024-07-13 19:54:02.876574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.366 test_start 00:06:16.366 oneshot 00:06:16.366 tick 100 00:06:16.366 tick 100 00:06:16.366 tick 250 00:06:16.366 tick 100 00:06:16.366 tick 100 00:06:16.366 tick 100 00:06:16.366 tick 250 00:06:16.366 tick 500 00:06:16.366 tick 100 00:06:16.366 tick 100 00:06:16.366 tick 250 00:06:16.366 tick 100 00:06:16.366 tick 100 00:06:16.366 test_end 00:06:16.366 00:06:16.366 real 0m1.245s 00:06:16.366 user 0m1.156s 00:06:16.366 sys 0m0.083s 00:06:16.366 19:54:03 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.366 19:54:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:16.366 ************************************ 00:06:16.366 END TEST event_reactor 00:06:16.366 ************************************ 00:06:16.366 19:54:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:16.366 19:54:03 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:16.366 19:54:03 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.366 19:54:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.366 ************************************ 00:06:16.366 START TEST event_reactor_perf 00:06:16.366 ************************************ 00:06:16.366 19:54:04 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:16.366 [2024-07-13 19:54:04.017656] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
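The event_reactor trace just above appears to be the framework's oneshot-plus-periodic self check: after test_start a single "oneshot" fires once, and the recurring "tick N" lines are its timed callbacks reporting in, the smaller-period ones (tick 100) proportionally more often than the slower ones (tick 250, tick 500) across the one-second -t 1 window; the exact tick semantics are the test binary's own and are not asserted here. The standalone invocation, mirroring the command in this log:

  ./test/event/reactor/reactor -t 1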
00:06:16.366 [2024-07-13 19:54:04.017719] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3067305 ] 00:06:16.624 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.624 [2024-07-13 19:54:04.079336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.624 [2024-07-13 19:54:04.169253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.997 test_start 00:06:17.997 test_end 00:06:17.997 Performance: 352504 events per second 00:06:17.997 00:06:17.997 real 0m1.247s 00:06:17.997 user 0m1.155s 00:06:17.997 sys 0m0.087s 00:06:17.997 19:54:05 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.997 19:54:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.997 ************************************ 00:06:17.997 END TEST event_reactor_perf 00:06:17.997 ************************************ 00:06:17.998 19:54:05 event -- event/event.sh@49 -- # uname -s 00:06:17.998 19:54:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:17.998 19:54:05 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:17.998 19:54:05 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:17.998 19:54:05 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.998 19:54:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.998 ************************************ 00:06:17.998 START TEST event_scheduler 00:06:17.998 ************************************ 00:06:17.998 19:54:05 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:17.998 * Looking for test storage... 00:06:17.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:17.998 19:54:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:17.998 19:54:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3067548 00:06:17.998 19:54:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:17.998 19:54:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.998 19:54:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3067548 00:06:17.998 19:54:05 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3067548 ']' 00:06:17.998 19:54:05 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.998 19:54:05 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.998 19:54:05 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
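reactor_perf's single figure above (352,504 events per second on one core, -c 0x1) is the framework's raw event-dispatch baseline, noticeably above the ~237k/s-per-core that event_perf measured, presumably because event_perf's events carry extra per-event accounting. The event_scheduler app starting next is launched with -m 0xF -p 0x2 --wait-for-rpc -f; a sketch of the flag meanings, inferred from this log's own output (-p picks the main core, 0x2 here, matching the --main-lcore=2 EAL parameter printed below; --wait-for-rpc holds subsystem init until the framework_start_init RPC the test issues after swapping schedulers):

  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
  ./scripts/rpc.py framework_set_scheduler dynamic   # issued as rpc_cmd below
  ./scripts/rpc.py framework_start_init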
00:06:17.998 19:54:05 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.998 19:54:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.998 [2024-07-13 19:54:05.399499] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:17.998 [2024-07-13 19:54:05.399575] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3067548 ] 00:06:17.998 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.998 [2024-07-13 19:54:05.457348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.998 [2024-07-13 19:54:05.545776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.998 [2024-07-13 19:54:05.545829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.998 [2024-07-13 19:54:05.545895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.998 [2024-07-13 19:54:05.545899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.998 19:54:05 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:17.998 19:54:05 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:06:17.998 19:54:05 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:17.998 19:54:05 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.998 19:54:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.998 POWER: Env isn't set yet! 00:06:17.998 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:17.998 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:17.998 POWER: Cannot get available frequencies of lcore 0 00:06:17.998 POWER: Attempting to initialise PSTAT power management... 
00:06:17.998 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:17.998 POWER: Initialized successfully for lcore 0 power management 00:06:17.998 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:17.998 POWER: Initialized successfully for lcore 1 power management 00:06:17.998 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:17.998 POWER: Initialized successfully for lcore 2 power management 00:06:17.998 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:17.998 POWER: Initialized successfully for lcore 3 power management 00:06:17.998 [2024-07-13 19:54:05.642083] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:17.998 [2024-07-13 19:54:05.642100] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:17.998 [2024-07-13 19:54:05.642119] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:17.998 19:54:05 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.998 19:54:05 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:17.998 19:54:05 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.998 19:54:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.257 [2024-07-13 19:54:05.749463] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:18.257 19:54:05 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.257 19:54:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:18.257 19:54:05 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.257 19:54:05 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.257 19:54:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.257 ************************************ 00:06:18.257 START TEST scheduler_create_thread 00:06:18.257 ************************************ 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.257 2 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.257 3 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.257 4 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.257 5 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.257 6 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.257 7 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.257 8 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.257 9 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.257 10 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.257 19:54:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.824 19:54:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.824 19:54:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:18.824 19:54:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.824 19:54:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.197 19:54:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.197 19:54:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:20.197 19:54:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:20.197 19:54:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.197 19:54:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.569 19:54:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.569 00:06:21.569 real 0m3.096s 00:06:21.569 user 0m0.009s 00:06:21.569 sys 0m0.004s 00:06:21.569 19:54:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.569 19:54:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.569 ************************************ 00:06:21.569 END TEST scheduler_create_thread 00:06:21.569 ************************************ 00:06:21.569 19:54:08 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:21.569 19:54:08 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3067548 00:06:21.569 19:54:08 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3067548 ']' 00:06:21.569 19:54:08 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3067548 00:06:21.569 19:54:08 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 
00:06:21.569 19:54:08 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:21.569 19:54:08 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3067548 00:06:21.569 19:54:08 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:21.569 19:54:08 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:21.569 19:54:08 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3067548' 00:06:21.569 killing process with pid 3067548 00:06:21.569 19:54:08 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3067548 00:06:21.569 19:54:08 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3067548 00:06:21.828 [2024-07-13 19:54:09.253634] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:21.828 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:21.828 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:21.828 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:21.828 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:21.828 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:21.828 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:21.828 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:21.828 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:22.086 00:06:22.086 real 0m4.185s 00:06:22.086 user 0m6.805s 00:06:22.086 sys 0m0.344s 00:06:22.086 19:54:09 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:22.086 19:54:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.086 ************************************ 00:06:22.086 END TEST event_scheduler 00:06:22.086 ************************************ 00:06:22.086 19:54:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:22.086 19:54:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:22.086 19:54:09 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:22.086 19:54:09 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.086 19:54:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.086 ************************************ 00:06:22.086 START TEST app_repeat 00:06:22.086 ************************************ 00:06:22.086 19:54:09 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:22.086 19:54:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.086 19:54:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.086 19:54:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:22.086 19:54:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.086 19:54:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:22.086 19:54:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:22.086 19:54:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:22.086 19:54:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3068567 00:06:22.086 19:54:09 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:22.086 19:54:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:22.086 19:54:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3068567' 00:06:22.086 Process app_repeat pid: 3068567 00:06:22.086 19:54:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:22.086 19:54:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:22.086 spdk_app_start Round 0 00:06:22.086 19:54:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3068567 /var/tmp/spdk-nbd.sock 00:06:22.086 19:54:09 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3068567 ']' 00:06:22.086 19:54:09 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:22.086 19:54:09 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:22.086 19:54:09 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:22.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:22.086 19:54:09 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:22.086 19:54:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.086 [2024-07-13 19:54:09.556301] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:22.086 [2024-07-13 19:54:09.556354] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068567 ] 00:06:22.086 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.086 [2024-07-13 19:54:09.620004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.086 [2024-07-13 19:54:09.718892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.086 [2024-07-13 19:54:09.718897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.343 19:54:09 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.344 19:54:09 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:22.344 19:54:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.601 Malloc0 00:06:22.601 19:54:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.859 Malloc1 00:06:22.859 19:54:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.859 19:54:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.859 19:54:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.859 19:54:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.859 19:54:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.859 19:54:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.859 19:54:10 event.app_repeat 
-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.859 19:54:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.859 19:54:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.859 19:54:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.859 19:54:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.859 19:54:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.859 19:54:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.859 19:54:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.859 19:54:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.859 19:54:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.117 /dev/nbd0 00:06:23.117 19:54:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.117 19:54:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.117 19:54:10 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:23.117 19:54:10 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:23.117 19:54:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:23.117 19:54:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:23.117 19:54:10 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:23.117 19:54:10 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:23.117 19:54:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:23.117 19:54:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:23.117 19:54:10 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.117 1+0 records in 00:06:23.117 1+0 records out 00:06:23.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017179 s, 23.8 MB/s 00:06:23.117 19:54:10 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.117 19:54:10 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:23.117 19:54:10 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.117 19:54:10 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:23.117 19:54:10 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:23.117 19:54:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.117 19:54:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.117 19:54:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.375 /dev/nbd1 00:06:23.375 19:54:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.375 19:54:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.375 19:54:10 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:23.375 19:54:10 event.app_repeat -- 
common/autotest_common.sh@865 -- # local i 00:06:23.375 19:54:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:23.375 19:54:10 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:23.375 19:54:10 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:23.375 19:54:10 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:23.375 19:54:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:23.375 19:54:10 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:23.375 19:54:10 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.375 1+0 records in 00:06:23.375 1+0 records out 00:06:23.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185343 s, 22.1 MB/s 00:06:23.375 19:54:10 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.375 19:54:10 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:23.375 19:54:10 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:23.375 19:54:10 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:23.375 19:54:10 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:23.375 19:54:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.375 19:54:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.375 19:54:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.375 19:54:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.375 19:54:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.633 19:54:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.633 { 00:06:23.633 "nbd_device": "/dev/nbd0", 00:06:23.633 "bdev_name": "Malloc0" 00:06:23.633 }, 00:06:23.633 { 00:06:23.633 "nbd_device": "/dev/nbd1", 00:06:23.633 "bdev_name": "Malloc1" 00:06:23.633 } 00:06:23.633 ]' 00:06:23.633 19:54:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.633 { 00:06:23.633 "nbd_device": "/dev/nbd0", 00:06:23.634 "bdev_name": "Malloc0" 00:06:23.634 }, 00:06:23.634 { 00:06:23.634 "nbd_device": "/dev/nbd1", 00:06:23.634 "bdev_name": "Malloc1" 00:06:23.634 } 00:06:23.634 ]' 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.634 /dev/nbd1' 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.634 /dev/nbd1' 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.634 19:54:11 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.634 256+0 records in 00:06:23.634 256+0 records out 00:06:23.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503518 s, 208 MB/s 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.634 256+0 records in 00:06:23.634 256+0 records out 00:06:23.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235174 s, 44.6 MB/s 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.634 256+0 records in 00:06:23.634 256+0 records out 00:06:23.634 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221588 s, 47.3 MB/s 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.634 19:54:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.892 19:54:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.892 19:54:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.892 19:54:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.892 19:54:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.892 19:54:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.892 19:54:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.150 19:54:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.150 19:54:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.150 19:54:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.150 19:54:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.408 19:54:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.408 19:54:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.408 19:54:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.408 19:54:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.408 19:54:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.408 19:54:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.408 19:54:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.408 19:54:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.408 19:54:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.408 19:54:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.408 19:54:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.666 19:54:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.666 19:54:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.666 19:54:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.666 19:54:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.666 19:54:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.666 19:54:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.666 19:54:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.666 19:54:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.666 19:54:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.666 19:54:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.666 19:54:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.666 19:54:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.666 19:54:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.924 19:54:12 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:06:25.182 [2024-07-13 19:54:12.602955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.182 [2024-07-13 19:54:12.693063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.182 [2024-07-13 19:54:12.693068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.182 [2024-07-13 19:54:12.751462] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.182 [2024-07-13 19:54:12.751532] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.459 19:54:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.459 19:54:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:28.459 spdk_app_start Round 1 00:06:28.459 19:54:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3068567 /var/tmp/spdk-nbd.sock 00:06:28.459 19:54:15 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3068567 ']' 00:06:28.459 19:54:15 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.459 19:54:15 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:28.459 19:54:15 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.459 19:54:15 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:28.459 19:54:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.459 19:54:15 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.459 19:54:15 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:28.459 19:54:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.459 Malloc0 00:06:28.459 19:54:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.724 Malloc1 00:06:28.724 19:54:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.724 19:54:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.724 19:54:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.724 19:54:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.724 19:54:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.724 19:54:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.724 19:54:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.724 19:54:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.724 19:54:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.724 19:54:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.724 19:54:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.724 19:54:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
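[annotation] The trace above enters nbd_rpc_data_verify for Round 1: each malloc bdev is paired with a kernel nbd node over the dedicated RPC socket. A minimal sketch of the attach loop, assuming the array names from bdev/nbd_common.sh and a checkout rooted at $SPDK_DIR:

    bdev_list=('Malloc0' 'Malloc1')
    nbd_list=('/dev/nbd0' '/dev/nbd1')
    for ((i = 0; i < ${#nbd_list[@]}; i++)); do
        # nbd_start_disk exports the bdev as a kernel block device
        "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
    done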
00:06:28.724 19:54:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.724 19:54:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.724 19:54:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.724 19:54:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.027 /dev/nbd0 00:06:29.027 19:54:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.027 19:54:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.027 19:54:16 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:29.027 19:54:16 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:29.027 19:54:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:29.027 19:54:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:29.027 19:54:16 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:29.027 19:54:16 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:29.027 19:54:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:29.027 19:54:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:29.027 19:54:16 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.027 1+0 records in 00:06:29.027 1+0 records out 00:06:29.027 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244722 s, 16.7 MB/s 00:06:29.027 19:54:16 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.027 19:54:16 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:29.027 19:54:16 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.027 19:54:16 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:29.027 19:54:16 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:29.027 19:54:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.027 19:54:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.027 19:54:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.286 /dev/nbd1 00:06:29.286 19:54:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.286 19:54:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.286 19:54:16 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:29.286 19:54:16 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:29.286 19:54:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:29.286 19:54:16 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:29.286 19:54:16 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:29.286 19:54:16 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:29.286 19:54:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:29.286 19:54:16 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 
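[annotation] Each attach is followed by waitfornbd, traced above for nbd0: up to 20 polls of /proc/partitions, then a single O_DIRECT read to prove the device actually serves I/O. A hedged reconstruction; the delay between polls is an assumption, since the xtrace shows only the loop bounds:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; not visible in the xtrace
        done
        # one direct 4 KiB read; a dead nbd device would fail or stall here
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # non-empty read => device is live
    }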
00:06:29.286 19:54:16 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.286 1+0 records in 00:06:29.286 1+0 records out 00:06:29.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185888 s, 22.0 MB/s 00:06:29.286 19:54:16 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.286 19:54:16 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:29.286 19:54:16 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.286 19:54:16 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:29.286 19:54:16 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:29.286 19:54:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.286 19:54:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.286 19:54:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.286 19:54:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.286 19:54:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.544 { 00:06:29.544 "nbd_device": "/dev/nbd0", 00:06:29.544 "bdev_name": "Malloc0" 00:06:29.544 }, 00:06:29.544 { 00:06:29.544 "nbd_device": "/dev/nbd1", 00:06:29.544 "bdev_name": "Malloc1" 00:06:29.544 } 00:06:29.544 ]' 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.544 { 00:06:29.544 "nbd_device": "/dev/nbd0", 00:06:29.544 "bdev_name": "Malloc0" 00:06:29.544 }, 00:06:29.544 { 00:06:29.544 "nbd_device": "/dev/nbd1", 00:06:29.544 "bdev_name": "Malloc1" 00:06:29.544 } 00:06:29.544 ]' 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.544 /dev/nbd1' 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.544 /dev/nbd1' 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.544 19:54:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.544 256+0 records in 00:06:29.544 256+0 records out 00:06:29.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496657 s, 211 MB/s 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.544 256+0 records in 00:06:29.544 256+0 records out 00:06:29.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233542 s, 44.9 MB/s 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.544 256+0 records in 00:06:29.544 256+0 records out 00:06:29.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251502 s, 41.7 MB/s 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.544 19:54:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.802 19:54:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.802 19:54:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.802 19:54:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.802 
19:54:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.802 19:54:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.803 19:54:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.803 19:54:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.803 19:54:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.803 19:54:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.803 19:54:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.061 19:54:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.061 19:54:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.061 19:54:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.061 19:54:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.061 19:54:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.061 19:54:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.061 19:54:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.061 19:54:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.061 19:54:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.061 19:54:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.061 19:54:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.319 19:54:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.319 19:54:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.319 19:54:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.319 19:54:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.319 19:54:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.319 19:54:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.319 19:54:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:30.319 19:54:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.319 19:54:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.319 19:54:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.319 19:54:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.319 19:54:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.319 19:54:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:30.577 19:54:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:30.836 [2024-07-13 19:54:18.388171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.836 [2024-07-13 19:54:18.476961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.836 [2024-07-13 19:54:18.476966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.095 [2024-07-13 19:54:18.540152] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
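[annotation] At this point Round 1 has been torn down: both devices detached, nbd_get_count confirmed zero, spdk_kill_instance delivered SIGTERM, and after sleep 3 the app restarted for Round 2 (the reactor and notify NOTICEs above). The driver loop in event/event.sh appears to follow this shape; a hedged sketch, assuming $app_pid holds the app's pid:

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$app_pid" /var/tmp/spdk-nbd.sock   # RPC socket up?
        # create Malloc0/Malloc1, attach, write+verify, detach (traced above)
        "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            spdk_kill_instance SIGTERM
        sleep 3   # let the reactors drain before the next iteration
    done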
00:06:31.095 [2024-07-13 19:54:18.540250] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.622 19:54:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:33.622 19:54:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:33.622 spdk_app_start Round 2 00:06:33.622 19:54:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3068567 /var/tmp/spdk-nbd.sock 00:06:33.622 19:54:21 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3068567 ']' 00:06:33.622 19:54:21 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.622 19:54:21 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:33.622 19:54:21 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.622 19:54:21 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:33.622 19:54:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.879 19:54:21 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.879 19:54:21 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:33.879 19:54:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.137 Malloc0 00:06:34.137 19:54:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:34.395 Malloc1 00:06:34.395 19:54:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.395 19:54:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.395 19:54:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.395 19:54:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.395 19:54:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.395 19:54:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.395 19:54:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:34.395 19:54:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.395 19:54:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.395 19:54:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.395 19:54:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.395 19:54:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.395 19:54:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:34.395 19:54:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.395 19:54:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.395 19:54:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:34.653 /dev/nbd0 00:06:34.653 
19:54:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.653 19:54:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.653 19:54:22 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:34.653 19:54:22 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:34.653 19:54:22 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:34.653 19:54:22 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:34.653 19:54:22 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:34.653 19:54:22 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:34.653 19:54:22 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:34.653 19:54:22 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:34.653 19:54:22 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.653 1+0 records in 00:06:34.653 1+0 records out 00:06:34.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191734 s, 21.4 MB/s 00:06:34.653 19:54:22 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.653 19:54:22 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:34.653 19:54:22 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.653 19:54:22 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:34.653 19:54:22 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:34.653 19:54:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.653 19:54:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.653 19:54:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:34.911 /dev/nbd1 00:06:34.911 19:54:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:34.911 19:54:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:34.911 19:54:22 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:34.911 19:54:22 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:34.911 19:54:22 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:34.911 19:54:22 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:34.911 19:54:22 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:34.911 19:54:22 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:34.911 19:54:22 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:34.911 19:54:22 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:34.911 19:54:22 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:34.911 1+0 records in 00:06:34.911 1+0 records out 00:06:34.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198923 s, 20.6 MB/s 00:06:34.911 19:54:22 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.911 19:54:22 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:34.911 19:54:22 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:34.911 19:54:22 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:34.911 19:54:22 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:34.911 19:54:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.911 19:54:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:34.911 19:54:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.911 19:54:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.911 19:54:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:35.169 { 00:06:35.169 "nbd_device": "/dev/nbd0", 00:06:35.169 "bdev_name": "Malloc0" 00:06:35.169 }, 00:06:35.169 { 00:06:35.169 "nbd_device": "/dev/nbd1", 00:06:35.169 "bdev_name": "Malloc1" 00:06:35.169 } 00:06:35.169 ]' 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:35.169 { 00:06:35.169 "nbd_device": "/dev/nbd0", 00:06:35.169 "bdev_name": "Malloc0" 00:06:35.169 }, 00:06:35.169 { 00:06:35.169 "nbd_device": "/dev/nbd1", 00:06:35.169 "bdev_name": "Malloc1" 00:06:35.169 } 00:06:35.169 ]' 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:35.169 /dev/nbd1' 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:35.169 /dev/nbd1' 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:35.169 256+0 records in 00:06:35.169 256+0 records out 00:06:35.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00391464 s, 268 MB/s 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:35.169 256+0 records in 00:06:35.169 256+0 records out 00:06:35.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212287 s, 49.4 MB/s 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:35.169 256+0 records in 00:06:35.169 256+0 records out 00:06:35.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247076 s, 42.4 MB/s 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.169 19:54:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:35.427 19:54:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.427 19:54:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.427 19:54:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.427 19:54:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.427 19:54:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.427 19:54:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:35.427 19:54:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.427 19:54:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
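[annotation] The Round 2 data path just completed above: 1 MiB of /dev/urandom staged in nbdrandtest, written to each device with O_DIRECT, then byte-compared back. A condensed sketch of nbd_dd_data_verify's two phases, assuming $tmp_file points at the staging file:

    # write phase: 256 x 4 KiB = 1 MiB per device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done
    # verify phase: cmp exits non-zero on the first differing byte,
    # which fails the test under errexit
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"
    done
    rm "$tmp_file"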
00:06:35.427 19:54:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.427 19:54:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:35.993 19:54:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:35.993 19:54:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:36.251 19:54:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:36.508 [2024-07-13 19:54:24.121275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:36.765 [2024-07-13 19:54:24.211582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.765 [2024-07-13 19:54:24.211586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.765 [2024-07-13 19:54:24.274389] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:36.765 [2024-07-13 19:54:24.274467] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
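[annotation] Before each kill, nbd_get_count (traced above) asserts that nothing is still attached: it fetches the disk list over RPC, extracts device names with jq, and counts /dev/nbd matches. Sketch of the check; the `|| true` mirrors the bare `true` in the trace, because grep -c exits non-zero when it counts zero matches:

    disks_json=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    if [ "$count" -ne 0 ]; then
        echo "nbd devices still attached: $count" >&2
        return 1   # inside the nbd_get_count helper
    fi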
00:06:39.290 19:54:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3068567 /var/tmp/spdk-nbd.sock 00:06:39.290 19:54:26 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3068567 ']' 00:06:39.290 19:54:26 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.290 19:54:26 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:39.290 19:54:26 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:39.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:39.290 19:54:26 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:39.290 19:54:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.548 19:54:27 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:39.548 19:54:27 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:39.548 19:54:27 event.app_repeat -- event/event.sh@39 -- # killprocess 3068567 00:06:39.548 19:54:27 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3068567 ']' 00:06:39.548 19:54:27 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3068567 00:06:39.548 19:54:27 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:06:39.548 19:54:27 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:39.548 19:54:27 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3068567 00:06:39.548 19:54:27 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:39.548 19:54:27 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:39.548 19:54:27 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3068567' 00:06:39.548 killing process with pid 3068567 00:06:39.548 19:54:27 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3068567 00:06:39.548 19:54:27 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3068567 00:06:39.806 spdk_app_start is called in Round 0. 00:06:39.806 Shutdown signal received, stop current app iteration 00:06:39.806 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:39.806 spdk_app_start is called in Round 1. 00:06:39.806 Shutdown signal received, stop current app iteration 00:06:39.806 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:39.806 spdk_app_start is called in Round 2. 00:06:39.806 Shutdown signal received, stop current app iteration 00:06:39.806 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 reinitialization... 00:06:39.806 spdk_app_start is called in Round 3. 
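[annotation] The "spdk_app_start is called in Round 0..3" lines above are the app's own summary of the three restarts; the harness then reaps it with killprocess, whose steps are traced at autotest_common.sh@946-970. A hedged reconstruction:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid"                       # fails fast if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # sudo-wrapped targets get special handling (elided here)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # collect the exit status
    }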
00:06:39.806 Shutdown signal received, stop current app iteration 00:06:39.806 19:54:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:39.806 19:54:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:39.806 00:06:39.806 real 0m17.845s 00:06:39.806 user 0m38.748s 00:06:39.806 sys 0m3.262s 00:06:39.806 19:54:27 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:39.806 19:54:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.807 ************************************ 00:06:39.807 END TEST app_repeat 00:06:39.807 ************************************ 00:06:39.807 19:54:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:39.807 19:54:27 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:39.807 19:54:27 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:39.807 19:54:27 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:39.807 19:54:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.807 ************************************ 00:06:39.807 START TEST cpu_locks 00:06:39.807 ************************************ 00:06:39.807 19:54:27 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:40.065 * Looking for test storage... 00:06:40.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:40.065 19:54:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:40.065 19:54:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:40.065 19:54:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:40.065 19:54:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:40.065 19:54:27 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:40.065 19:54:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.065 19:54:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.065 ************************************ 00:06:40.065 START TEST default_locks 00:06:40.065 ************************************ 00:06:40.065 19:54:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:06:40.065 19:54:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3070916 00:06:40.065 19:54:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.065 19:54:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3070916 00:06:40.065 19:54:27 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3070916 ']' 00:06:40.065 19:54:27 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.065 19:54:27 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:40.065 19:54:27 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
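[annotation] cpu_locks begins by launching a bare spdk_tgt pinned to core 0 (-m 0x1) and blocking in waitforlisten until the RPC socket answers. The trace exposes only the locals (rpc_addr, max_retries=100); a plausible sketch, where probing with rpc_get_methods is an assumption, not something the xtrace shows:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" || return 1       # target died during startup
            "$SPDK_DIR"/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
                &>/dev/null && return 0
            sleep 0.1                        # assumed poll interval
        done
        return 1
    }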
00:06:40.065 19:54:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:40.065 19:54:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.065 [2024-07-13 19:54:27.558913] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:40.065 [2024-07-13 19:54:27.559005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070916 ] 00:06:40.065 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.065 [2024-07-13 19:54:27.619862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.065 [2024-07-13 19:54:27.710863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.323 19:54:27 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:40.323 19:54:27 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:06:40.323 19:54:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3070916 00:06:40.323 19:54:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3070916 00:06:40.323 19:54:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.888 lslocks: write error 00:06:40.888 19:54:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3070916 00:06:40.888 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3070916 ']' 00:06:40.888 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3070916 00:06:40.888 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:06:40.888 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:40.888 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3070916 00:06:40.888 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:40.888 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:40.888 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3070916' 00:06:40.888 killing process with pid 3070916 00:06:40.889 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3070916 00:06:40.889 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3070916 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3070916 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3070916 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@651 
-- # waitforlisten 3070916 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3070916 ']' 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3070916) - No such process 00:06:41.148 ERROR: process (pid: 3070916) is no longer running 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:41.148 00:06:41.148 real 0m1.201s 00:06:41.148 user 0m1.165s 00:06:41.148 sys 0m0.510s 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.148 19:54:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.148 ************************************ 00:06:41.148 END TEST default_locks 00:06:41.148 ************************************ 00:06:41.148 19:54:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:41.148 19:54:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:41.148 19:54:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.148 19:54:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.148 ************************************ 00:06:41.148 START TEST default_locks_via_rpc 00:06:41.148 ************************************ 00:06:41.148 19:54:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:06:41.148 19:54:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3071084 00:06:41.148 19:54:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.148 19:54:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3071084 00:06:41.148 19:54:28 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3071084 ']' 00:06:41.148 19:54:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.148 19:54:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:41.148 19:54:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.148 19:54:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:41.148 19:54:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.407 [2024-07-13 19:54:28.806886] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:41.407 [2024-07-13 19:54:28.806968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071084 ] 00:06:41.407 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.407 [2024-07-13 19:54:28.863916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.407 [2024-07-13 19:54:28.951892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3071084 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3071084 00:06:41.666 19:54:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.925 19:54:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3071084 00:06:41.925 19:54:29 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3071084 ']' 00:06:41.925 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3071084 00:06:41.925 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:06:41.925 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:41.925 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3071084 00:06:41.925 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:41.925 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:41.925 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3071084' 00:06:41.925 killing process with pid 3071084 00:06:41.925 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3071084 00:06:41.925 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3071084 00:06:42.492 00:06:42.492 real 0m1.164s 00:06:42.492 user 0m1.079s 00:06:42.492 sys 0m0.533s 00:06:42.492 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.492 19:54:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.492 ************************************ 00:06:42.492 END TEST default_locks_via_rpc 00:06:42.492 ************************************ 00:06:42.492 19:54:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:42.492 19:54:29 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:42.492 19:54:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.492 19:54:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.492 ************************************ 00:06:42.492 START TEST non_locking_app_on_locked_coremask 00:06:42.492 ************************************ 00:06:42.492 19:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:06:42.492 19:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3071244 00:06:42.492 19:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.492 19:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3071244 /var/tmp/spdk.sock 00:06:42.492 19:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3071244 ']' 00:06:42.492 19:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.492 19:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.492 19:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
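[annotation] default_locks_via_rpc is starting above; the preceding default_locks test hinged on locks_exist (cpu_locks.sh@22), which asks lslocks whether the target pid holds a spdk_cpu_lock file lock. The stray "lslocks: write error" earlier is lslocks hitting the pipe that grep -q closes as soon as it matches, not a test failure. Minimal sketch:

    locks_exist() {
        local pid=$1
        # the target flock()s a per-core spdk_cpu_lock file; lslocks must
        # list it for the owning pid, or the lock was never taken/released
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }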
00:06:42.492 19:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.492 19:54:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.492 [2024-07-13 19:54:30.015912] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:42.492 [2024-07-13 19:54:30.016006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071244 ] 00:06:42.492 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.492 [2024-07-13 19:54:30.077513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.751 [2024-07-13 19:54:30.166742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.009 19:54:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.009 19:54:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:43.009 19:54:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3071366 00:06:43.010 19:54:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:43.010 19:54:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3071366 /var/tmp/spdk2.sock 00:06:43.010 19:54:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3071366 ']' 00:06:43.010 19:54:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.010 19:54:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:43.010 19:54:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.010 19:54:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:43.010 19:54:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.010 [2024-07-13 19:54:30.468835] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:43.010 [2024-07-13 19:54:30.468947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071366 ] 00:06:43.010 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.010 [2024-07-13 19:54:30.561597] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
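The launch pattern this test exercises, as traced above: a primary target that claims the core-0 lock, then a secondary on the same cpumask with the lock check disabled and its own RPC socket. A condensed sketch, with paths exactly as in this workspace and helper names taken from the trace:

SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

"$SPDK_TGT" -m 0x1 &                       # primary: claims /var/tmp/spdk_cpu_lock_000
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock

# secondary: same cpumask, but --disable-cpumask-locks skips the claim,
# and -r points it at a separate RPC socket so both can be driven
"$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
spdk_tgt_pid2=$!
waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock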
00:06:43.010 [2024-07-13 19:54:30.561629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.268 [2024-07-13 19:54:30.741764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.834 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.834 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:43.834 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3071244 00:06:43.834 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3071244 00:06:43.834 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.399 lslocks: write error 00:06:44.399 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3071244 00:06:44.399 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3071244 ']' 00:06:44.399 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3071244 00:06:44.399 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:44.399 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:44.399 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3071244 00:06:44.399 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:44.399 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:44.399 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3071244' 00:06:44.399 killing process with pid 3071244 00:06:44.399 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3071244 00:06:44.399 19:54:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3071244 00:06:45.348 19:54:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3071366 00:06:45.348 19:54:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3071366 ']' 00:06:45.348 19:54:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3071366 00:06:45.348 19:54:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:45.348 19:54:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:45.348 19:54:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3071366 00:06:45.348 19:54:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:45.348 19:54:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:45.348 19:54:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3071366' 00:06:45.348 
killing process with pid 3071366 00:06:45.348 19:54:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3071366 00:06:45.348 19:54:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3071366 00:06:45.605 00:06:45.605 real 0m3.150s 00:06:45.605 user 0m3.322s 00:06:45.605 sys 0m1.018s 00:06:45.605 19:54:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.605 19:54:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.605 ************************************ 00:06:45.605 END TEST non_locking_app_on_locked_coremask 00:06:45.605 ************************************ 00:06:45.605 19:54:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:45.605 19:54:33 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:45.605 19:54:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.605 19:54:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.605 ************************************ 00:06:45.605 START TEST locking_app_on_unlocked_coremask 00:06:45.605 ************************************ 00:06:45.605 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:06:45.605 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3071679 00:06:45.605 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:45.605 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3071679 /var/tmp/spdk.sock 00:06:45.605 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3071679 ']' 00:06:45.605 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.605 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:45.605 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.605 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:45.605 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.605 [2024-07-13 19:54:33.219992] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:45.605 [2024-07-13 19:54:33.220078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071679 ] 00:06:45.605 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.863 [2024-07-13 19:54:33.283942] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
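The locks_exist probes in these tests, and the stray "lslocks: write error" lines they produce, follow from the two-command pipeline visible in the trace at cpu_locks.sh@22. The error appears to be benign: grep -q exits at the first match, so lslocks takes an EPIPE on its next write, while the pipeline status still reflects the match. A sketch:

locks_exist() {
    local pid=$1
    # a held core lock shows up in lslocks output tagged spdk_cpu_lock;
    # grep -q keeps the probe silent and short-circuits on first match
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

locks_exist 3071244 && echo "core lock held by 3071244"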
00:06:45.863 [2024-07-13 19:54:33.283985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.863 [2024-07-13 19:54:33.371996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.120 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:46.120 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:46.120 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3071799 00:06:46.120 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3071799 /var/tmp/spdk2.sock 00:06:46.120 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:46.120 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3071799 ']' 00:06:46.120 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.120 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.120 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:46.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.120 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.120 19:54:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.120 [2024-07-13 19:54:33.683881] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
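waitforlisten is only partially visible in the trace (@827 through @860: the pid guard, the rpc_addr default, max_retries=100, and the final return 0). A plausible fleshed-out sketch follows; the connect probe via rpc.py is an assumption, since the trace does not show which probe the real helper uses.

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}   # @831 default
    local max_retries=100                     # @832
    [ -z "$pid" ] && return 1                 # @827 guard
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # target died while starting
        # assumed probe: any cheap RPC works once the socket is accepting
        if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0                              # @860
        fi
        sleep 0.5
    done
    return 1
}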
00:06:46.120 [2024-07-13 19:54:33.683963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3071799 ] 00:06:46.120 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.382 [2024-07-13 19:54:33.779733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.382 [2024-07-13 19:54:33.964593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.315 19:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.315 19:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:47.315 19:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3071799 00:06:47.315 19:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3071799 00:06:47.315 19:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.574 lslocks: write error 00:06:47.574 19:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3071679 00:06:47.574 19:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3071679 ']' 00:06:47.574 19:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3071679 00:06:47.574 19:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:47.574 19:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:47.574 19:54:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3071679 00:06:47.574 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:47.574 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:47.574 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3071679' 00:06:47.574 killing process with pid 3071679 00:06:47.574 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3071679 00:06:47.574 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3071679 00:06:48.509 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3071799 00:06:48.509 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3071799 ']' 00:06:48.509 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3071799 00:06:48.509 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:48.509 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:48.509 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3071799 00:06:48.509 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
00:06:48.509 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:48.509 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3071799' 00:06:48.509 killing process with pid 3071799 00:06:48.509 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3071799 00:06:48.509 19:54:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3071799 00:06:48.768 00:06:48.768 real 0m3.106s 00:06:48.768 user 0m3.232s 00:06:48.768 sys 0m1.017s 00:06:48.768 19:54:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.768 19:54:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.768 ************************************ 00:06:48.768 END TEST locking_app_on_unlocked_coremask 00:06:48.768 ************************************ 00:06:48.768 19:54:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:48.768 19:54:36 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:48.768 19:54:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.768 19:54:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.768 ************************************ 00:06:48.768 START TEST locking_app_on_locked_coremask 00:06:48.768 ************************************ 00:06:48.768 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:06:48.768 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3072114 00:06:48.768 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.768 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3072114 /var/tmp/spdk.sock 00:06:48.768 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3072114 ']' 00:06:48.768 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.768 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:48.768 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.768 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:48.768 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.768 [2024-07-13 19:54:36.365486] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
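Each START/END banner pair above comes from the run_test wrapper: the '[' 2 -le 1 ']' check in the trace is its argument-count guard, and the real/user/sys triple after each test is bash's time builtin. A rough sketch under those assumptions, with the xtrace bookkeeping omitted (the real wrapper also prefixes every traced line with the test name):

run_test() {
    [ "$#" -le 1 ] && return 1   # needs a test name plus a command to run
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                    # produces the real/user/sys lines above
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}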
00:06:48.768 [2024-07-13 19:54:36.365550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072114 ] 00:06:48.768 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.768 [2024-07-13 19:54:36.425998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.026 [2024-07-13 19:54:36.516336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3072118 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3072118 /var/tmp/spdk2.sock 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3072118 /var/tmp/spdk2.sock 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3072118 /var/tmp/spdk2.sock 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3072118 ']' 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:49.285 19:54:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.285 [2024-07-13 19:54:36.822395] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
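The NOT wrapper driving this negative test is almost fully visible in the trace (@648 through @675). The sketch below mirrors those lines; the (( !es == 0 )) arithmetic is the whole trick: the compound command succeeds exactly when es is non-zero, i.e. when the wrapped command failed.

NOT() {
    local es=0                 # @648
    "$@" || es=$?              # run the wrapped command, capture its failure
    # @659/@670: the real helper also screens signal-death codes (>128) and
    # an optional expected-output string; both are no-ops in this run
    (( es > 128 )) && es=1     # (assumed normalization)
    (( !es == 0 ))             # status 0 iff es != 0: the "NOT" of the command
}

# usage as in this test: succeeds because the second target cannot start
NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock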
00:06:49.285 [2024-07-13 19:54:36.822469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072118 ] 00:06:49.285 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.285 [2024-07-13 19:54:36.916201] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3072114 has claimed it. 00:06:49.285 [2024-07-13 19:54:36.916260] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:50.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3072118) - No such process 00:06:50.219 ERROR: process (pid: 3072118) is no longer running 00:06:50.219 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:50.219 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:50.219 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:50.219 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.219 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:50.219 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.219 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3072114 00:06:50.219 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3072114 00:06:50.219 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.478 lslocks: write error 00:06:50.478 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3072114 00:06:50.478 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3072114 ']' 00:06:50.478 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3072114 00:06:50.478 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:50.478 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:50.478 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3072114 00:06:50.478 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:50.478 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:50.478 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3072114' 00:06:50.478 killing process with pid 3072114 00:06:50.478 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3072114 00:06:50.478 19:54:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3072114 00:06:50.737 00:06:50.737 real 0m2.055s 00:06:50.737 user 0m2.212s 00:06:50.737 sys 0m0.654s 00:06:50.737 19:54:38 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:50.737 19:54:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.737 ************************************ 00:06:50.737 END TEST locking_app_on_locked_coremask 00:06:50.737 ************************************ 00:06:50.996 19:54:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:50.996 19:54:38 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:50.996 19:54:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.996 19:54:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.996 ************************************ 00:06:50.996 START TEST locking_overlapped_coremask 00:06:50.996 ************************************ 00:06:50.996 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:50.996 19:54:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3072402 00:06:50.996 19:54:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:50.996 19:54:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3072402 /var/tmp/spdk.sock 00:06:50.996 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3072402 ']' 00:06:50.996 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.996 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:50.996 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.996 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:50.996 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.996 [2024-07-13 19:54:38.479630] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
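Here -m 0x7 pins the target to cores 0 through 2, one reactor and one /var/tmp/spdk_cpu_lock_NNN file per set bit. A small helper to expand a mask, for anyone reading along (not part of the test suite):

mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=()
    while (( mask != 0 )); do
        if (( mask & 1 )); then cores+=("$core"); fi
        core=$(( core + 1 ))
        mask=$(( mask >> 1 ))
    done
    echo "${cores[@]}"
}

mask_to_cores 0x7    # -> 0 1 2, matching the three reactors above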
00:06:50.996 [2024-07-13 19:54:38.479734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072402 ] 00:06:50.996 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.996 [2024-07-13 19:54:38.542887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.996 [2024-07-13 19:54:38.636462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.996 [2024-07-13 19:54:38.636518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.996 [2024-07-13 19:54:38.636522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3072422 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3072422 /var/tmp/spdk2.sock 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3072422 /var/tmp/spdk2.sock 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3072422 /var/tmp/spdk2.sock 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3072422 ']' 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.254 19:54:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.512 [2024-07-13 19:54:38.937394] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
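The failure that follows is arithmetic: the primary holds 0x7 (cores 0 to 2) and the secondary asks for 0x1c (cores 2 to 4), so the masks intersect on core 2, which is exactly the core named in the claim error below.

printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> overlap: 0x4, bit 2, i.e. core 2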
00:06:51.512 [2024-07-13 19:54:38.937489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072422 ] 00:06:51.512 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.512 [2024-07-13 19:54:39.023272] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3072402 has claimed it. 00:06:51.512 [2024-07-13 19:54:39.023338] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:52.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3072422) - No such process 00:06:52.079 ERROR: process (pid: 3072422) is no longer running 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3072402 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3072402 ']' 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3072402 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3072402 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3072402' 00:06:52.079 killing process with pid 3072402 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3072402 00:06:52.079 19:54:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3072402 00:06:52.645 00:06:52.645 real 0m1.633s 00:06:52.645 user 0m4.391s 00:06:52.645 sys 0m0.450s 00:06:52.645 19:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:52.645 19:54:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.645 ************************************ 00:06:52.645 END TEST locking_overlapped_coremask 00:06:52.645 ************************************ 00:06:52.645 19:54:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:52.645 19:54:40 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:52.645 19:54:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:52.645 19:54:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.645 ************************************ 00:06:52.645 START TEST locking_overlapped_coremask_via_rpc 00:06:52.645 ************************************ 00:06:52.645 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:52.645 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3072586 00:06:52.645 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:52.645 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3072586 /var/tmp/spdk.sock 00:06:52.645 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3072586 ']' 00:06:52.645 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.645 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:52.645 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.645 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:52.645 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.645 [2024-07-13 19:54:40.160280] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:52.645 [2024-07-13 19:54:40.160360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072586 ] 00:06:52.645 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.645 [2024-07-13 19:54:40.222570] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
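check_remaining_locks, traced just before the previous test ended (cpu_locks.sh@36 through @38), is a glob-against-brace-expansion comparison. Nearly verbatim from the trace:

check_remaining_locks() {
    # whatever lock files exist must be exactly the ones the surviving
    # primary (mask 0x7, cores 0-2) is expected to hold
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
}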
00:06:52.645 [2024-07-13 19:54:40.222614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.903 [2024-07-13 19:54:40.318714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.903 [2024-07-13 19:54:40.318766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.903 [2024-07-13 19:54:40.318784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.161 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:53.161 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:53.161 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3072706 00:06:53.161 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3072706 /var/tmp/spdk2.sock 00:06:53.161 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3072706 ']' 00:06:53.161 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:53.161 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.161 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:53.161 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.161 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:53.161 19:54:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.161 [2024-07-13 19:54:40.623540] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:53.161 [2024-07-13 19:54:40.623637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3072706 ] 00:06:53.161 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.161 [2024-07-13 19:54:40.710622] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:53.161 [2024-07-13 19:54:40.710656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.419 [2024-07-13 19:54:40.887509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.419 [2024-07-13 19:54:40.890932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:53.419 [2024-07-13 19:54:40.890935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.982 [2024-07-13 19:54:41.579964] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3072586 has claimed it. 
00:06:53.982 request: 00:06:53.982 { 00:06:53.982 "method": "framework_enable_cpumask_locks", 00:06:53.982 "req_id": 1 00:06:53.982 } 00:06:53.982 Got JSON-RPC error response 00:06:53.982 response: 00:06:53.982 { 00:06:53.982 "code": -32603, 00:06:53.982 "message": "Failed to claim CPU core: 2" 00:06:53.982 } 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3072586 /var/tmp/spdk.sock 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3072586 ']' 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:53.982 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.238 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.238 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:54.238 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3072706 /var/tmp/spdk2.sock 00:06:54.238 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3072706 ']' 00:06:54.238 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.238 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:54.238 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
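The request/response pair above is plain JSON-RPC over the secondary's Unix socket. Assuming the stock rpc.py client shipped in scripts/, the same failure could be reproduced by hand; the invocation below is illustrative, not taken from this log:

# the primary on /var/tmp/spdk.sock claimed cores 1, 2, 0 first, so this
# fails with error -32603, "Failed to claim CPU core: 2"
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks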
00:06:54.238 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:54.238 19:54:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.495 19:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:54.495 19:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:54.495 19:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:54.495 19:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.495 19:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.495 19:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.495 00:06:54.495 real 0m1.980s 00:06:54.495 user 0m1.032s 00:06:54.495 sys 0m0.196s 00:06:54.495 19:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.495 19:54:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.495 ************************************ 00:06:54.495 END TEST locking_overlapped_coremask_via_rpc 00:06:54.495 ************************************ 00:06:54.495 19:54:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:54.495 19:54:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3072586 ]] 00:06:54.495 19:54:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3072586 00:06:54.495 19:54:42 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3072586 ']' 00:06:54.495 19:54:42 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3072586 00:06:54.495 19:54:42 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:54.495 19:54:42 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:54.495 19:54:42 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3072586 00:06:54.495 19:54:42 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:54.495 19:54:42 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:54.495 19:54:42 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3072586' 00:06:54.495 killing process with pid 3072586 00:06:54.495 19:54:42 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3072586 00:06:54.495 19:54:42 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3072586 00:06:55.061 19:54:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3072706 ]] 00:06:55.061 19:54:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3072706 00:06:55.061 19:54:42 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3072706 ']' 00:06:55.061 19:54:42 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3072706 00:06:55.061 19:54:42 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:55.061 19:54:42 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:55.061 19:54:42 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3072706 00:06:55.061 19:54:42 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:55.061 19:54:42 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:55.061 19:54:42 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3072706' 00:06:55.061 killing process with pid 3072706 00:06:55.061 19:54:42 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3072706 00:06:55.061 19:54:42 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3072706 00:06:55.342 19:54:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.342 19:54:42 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:55.342 19:54:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3072586 ]] 00:06:55.342 19:54:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3072586 00:06:55.342 19:54:42 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3072586 ']' 00:06:55.342 19:54:42 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3072586 00:06:55.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3072586) - No such process 00:06:55.342 19:54:42 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3072586 is not found' 00:06:55.342 Process with pid 3072586 is not found 00:06:55.342 19:54:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3072706 ]] 00:06:55.342 19:54:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3072706 00:06:55.342 19:54:42 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3072706 ']' 00:06:55.342 19:54:42 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3072706 00:06:55.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3072706) - No such process 00:06:55.342 19:54:42 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3072706 is not found' 00:06:55.342 Process with pid 3072706 is not found 00:06:55.342 19:54:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.342 00:06:55.342 real 0m15.554s 00:06:55.342 user 0m27.262s 00:06:55.342 sys 0m5.285s 00:06:55.342 19:54:42 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.342 19:54:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.342 ************************************ 00:06:55.342 END TEST cpu_locks 00:06:55.342 ************************************ 00:06:55.600 00:06:55.600 real 0m41.667s 00:06:55.600 user 1m19.426s 00:06:55.600 sys 0m9.365s 00:06:55.600 19:54:43 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:55.600 19:54:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.600 ************************************ 00:06:55.600 END TEST event 00:06:55.600 ************************************ 00:06:55.600 19:54:43 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:55.600 19:54:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:55.600 19:54:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.600 19:54:43 -- common/autotest_common.sh@10 -- # set +x 00:06:55.600 ************************************ 00:06:55.600 START TEST thread 00:06:55.600 ************************************ 00:06:55.600 19:54:43 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:55.600 * Looking for test storage... 00:06:55.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:55.600 19:54:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.600 19:54:43 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:55.600 19:54:43 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.600 19:54:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.600 ************************************ 00:06:55.600 START TEST thread_poller_perf 00:06:55.600 ************************************ 00:06:55.600 19:54:43 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.600 [2024-07-13 19:54:43.127211] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:55.600 [2024-07-13 19:54:43.127289] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073083 ] 00:06:55.600 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.600 [2024-07-13 19:54:43.190369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.857 [2024-07-13 19:54:43.280441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.857 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:56.791 ====================================== 00:06:56.791 busy:2712659580 (cyc) 00:06:56.791 total_run_count: 291000 00:06:56.791 tsc_hz: 2700000000 (cyc) 00:06:56.791 ====================================== 00:06:56.791 poller_cost: 9321 (cyc), 3452 (nsec) 00:06:56.791 00:06:56.791 real 0m1.256s 00:06:56.791 user 0m1.170s 00:06:56.791 sys 0m0.081s 00:06:56.791 19:54:44 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:56.791 19:54:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.791 ************************************ 00:06:56.791 END TEST thread_poller_perf 00:06:56.791 ************************************ 00:06:56.791 19:54:44 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.791 19:54:44 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:56.791 19:54:44 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:56.791 19:54:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.791 ************************************ 00:06:56.791 START TEST thread_poller_perf 00:06:56.791 ************************************ 00:06:56.791 19:54:44 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.791 [2024-07-13 19:54:44.435583] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
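The poller_cost line in the summary above is derived, not measured separately: busy cycles divided by total_run_count, then converted to nanoseconds at the reported TSC rate. For the 1-microsecond-period run just completed:

echo $(( 2712659580 / 291000 ))               # -> 9321 cycles per poller call
echo $(( 9321 * 1000000000 / 2700000000 ))    # -> 3452 nsec at tsc_hz = 2.7 GHz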
00:06:56.791 [2024-07-13 19:54:44.435650] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073237 ] 00:06:57.049 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.049 [2024-07-13 19:54:44.499801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.049 [2024-07-13 19:54:44.590036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.049 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:58.422 ====================================== 00:06:58.422 busy:2702655774 (cyc) 00:06:58.422 total_run_count: 3851000 00:06:58.422 tsc_hz: 2700000000 (cyc) 00:06:58.422 ====================================== 00:06:58.422 poller_cost: 701 (cyc), 259 (nsec) 00:06:58.422 00:06:58.422 real 0m1.254s 00:06:58.422 user 0m1.168s 00:06:58.422 sys 0m0.079s 00:06:58.422 19:54:45 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.422 19:54:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:58.422 ************************************ 00:06:58.422 END TEST thread_poller_perf 00:06:58.422 ************************************ 00:06:58.422 19:54:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:58.422 00:06:58.422 real 0m2.647s 00:06:58.422 user 0m2.395s 00:06:58.422 sys 0m0.251s 00:06:58.422 19:54:45 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.422 19:54:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.422 ************************************ 00:06:58.422 END TEST thread 00:06:58.422 ************************************ 00:06:58.422 19:54:45 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:58.422 19:54:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:58.422 19:54:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.422 19:54:45 -- common/autotest_common.sh@10 -- # set +x 00:06:58.422 ************************************ 00:06:58.422 START TEST accel 00:06:58.422 ************************************ 00:06:58.422 19:54:45 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:58.422 * Looking for test storage... 
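The poller_cost figures in the "======" summaries above follow directly from the raw counters: busy cycles divided by total_run_count gives cycles per poll, and the TSC rate (2.7 GHz here) converts cycles to nanoseconds. A minimal bash sketch of that arithmetic; poller_cost is a hypothetical helper name, not part of the test scripts:

    # cycles per poll = busy / total_run_count; nsec = cyc / (tsc_hz / 1e9)
    poller_cost() {
        local busy=$1 runs=$2 tsc_hz=$3
        local cyc=$((busy / runs))
        echo "poller_cost: $cyc (cyc), $((cyc * 1000000000 / tsc_hz)) (nsec)"
    }
    poller_cost 2712659580 291000 2700000000    # 1us-period run -> 9321 (cyc), 3452 (nsec)
    poller_cost 2702655774 3851000 2700000000   # 0us-period run -> 701 (cyc), 259 (nsec)

The zero-period run amortizes far more polls over the same second (3,851,000 vs 291,000), which is why its per-poll cost comes out roughly 13x lower.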
00:06:58.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:58.422 19:54:45 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:58.422 19:54:45 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:58.422 19:54:45 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:58.422 19:54:45 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3073430 00:06:58.422 19:54:45 accel -- accel/accel.sh@63 -- # waitforlisten 3073430 00:06:58.422 19:54:45 accel -- common/autotest_common.sh@827 -- # '[' -z 3073430 ']' 00:06:58.422 19:54:45 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.422 19:54:45 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:58.422 19:54:45 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:58.422 19:54:45 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:58.422 19:54:45 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.422 19:54:45 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.422 19:54:45 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.422 19:54:45 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:58.422 19:54:45 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.422 19:54:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.422 19:54:45 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.422 19:54:45 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.422 19:54:45 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:58.422 19:54:45 accel -- accel/accel.sh@41 -- # jq -r . 00:06:58.422 [2024-07-13 19:54:45.851324] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:58.422 [2024-07-13 19:54:45.851418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073430 ] 00:06:58.422 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.422 [2024-07-13 19:54:45.914133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.422 [2024-07-13 19:54:46.003441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.689 19:54:46 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:58.689 19:54:46 accel -- common/autotest_common.sh@860 -- # return 0 00:06:58.689 19:54:46 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:58.689 19:54:46 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:58.689 19:54:46 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:58.689 19:54:46 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:58.689 19:54:46 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:58.689 19:54:46 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:58.689 19:54:46 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:58.689 19:54:46 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.689 19:54:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.689 19:54:46 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.689 19:54:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.689 19:54:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.689 19:54:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.689 19:54:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.689 19:54:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.689 19:54:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.689 19:54:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.689 19:54:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.689 19:54:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.689 19:54:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.689 19:54:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.689 19:54:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.689 19:54:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.689 19:54:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.689 19:54:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.689 19:54:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.689 19:54:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.689 19:54:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.689 19:54:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.689 19:54:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.689 19:54:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.689 
19:54:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.689 19:54:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.689 19:54:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.689 19:54:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.689 19:54:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.689 19:54:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.689 19:54:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.689 19:54:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:58.689 19:54:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:58.689 19:54:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:58.689 19:54:46 accel -- accel/accel.sh@75 -- # killprocess 3073430 00:06:58.689 19:54:46 accel -- common/autotest_common.sh@946 -- # '[' -z 3073430 ']' 00:06:58.689 19:54:46 accel -- common/autotest_common.sh@950 -- # kill -0 3073430 00:06:58.689 19:54:46 accel -- common/autotest_common.sh@951 -- # uname 00:06:58.689 19:54:46 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:58.689 19:54:46 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3073430 00:06:58.689 19:54:46 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:58.689 19:54:46 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:58.689 19:54:46 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3073430' 00:06:58.689 killing process with pid 3073430 00:06:58.689 19:54:46 accel -- common/autotest_common.sh@965 -- # kill 3073430 00:06:58.689 19:54:46 accel -- common/autotest_common.sh@970 -- # wait 3073430 00:06:59.256 19:54:46 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:59.256 19:54:46 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:59.256 19:54:46 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:59.256 19:54:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.256 19:54:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.256 19:54:46 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:59.256 19:54:46 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:59.256 19:54:46 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:59.256 19:54:46 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.256 19:54:46 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.256 19:54:46 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.256 19:54:46 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.256 19:54:46 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.256 19:54:46 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:59.256 19:54:46 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
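The long IFS== loop traced above converts the JSON returned by the accel_get_opc_assignments RPC into the expected_opcs associative array; in this run every opcode maps to the software module. A condensed sketch of that setup, with rpc.py standing in for the script's $rpc_py wrapper:

    # accel_get_opc_assignments returns JSON like {"copy":"software",...};
    # jq flattens it to one "opcode=module" word per entry
    exp_opcs=($(rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    declare -A expected_opcs
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"   # split "copy=software"
        expected_opcs["$opc"]=$module             # "software" for every opcode here
    done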
00:06:59.256 19:54:46 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.256 19:54:46 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:59.256 19:54:46 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:59.256 19:54:46 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:59.256 19:54:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.256 19:54:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.256 ************************************ 00:06:59.256 START TEST accel_missing_filename 00:06:59.256 ************************************ 00:06:59.256 19:54:46 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:59.256 19:54:46 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:59.256 19:54:46 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:59.256 19:54:46 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:59.256 19:54:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.256 19:54:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:59.256 19:54:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.256 19:54:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:59.256 19:54:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:59.256 19:54:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:59.256 19:54:46 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.256 19:54:46 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.256 19:54:46 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.256 19:54:46 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.256 19:54:46 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.256 19:54:46 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:59.256 19:54:46 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:59.256 [2024-07-13 19:54:46.838752] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:59.256 [2024-07-13 19:54:46.838805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073598 ] 00:06:59.256 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.256 [2024-07-13 19:54:46.899373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.515 [2024-07-13 19:54:46.994058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.515 [2024-07-13 19:54:47.054961] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:59.515 [2024-07-13 19:54:47.138998] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:59.803 A filename is required. 
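This accel_missing_filename case runs accel_perf under the harness's NOT wrapper: the command is expected to fail, and "A filename is required." is the error that proves it did. As traced below, failure statuses above 128 (signal deaths) have 128 subtracted (es=234 becomes es=106) before being collapsed to a generic es=1. A simplified sketch of the idiom; the real helper in autotest_common.sh also validates its target first:

    # NOT succeeds only when the wrapped command fails
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0
    }
    NOT accel_perf -t 1 -w compress    # passes: compress needs -l <input file>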
00:06:59.803 19:54:47 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:59.803 19:54:47 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.803 19:54:47 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:59.803 19:54:47 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:59.803 19:54:47 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:59.803 19:54:47 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.803 00:06:59.803 real 0m0.400s 00:06:59.803 user 0m0.290s 00:06:59.803 sys 0m0.141s 00:06:59.803 19:54:47 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:59.803 19:54:47 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:59.803 ************************************ 00:06:59.803 END TEST accel_missing_filename 00:06:59.803 ************************************ 00:06:59.803 19:54:47 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.803 19:54:47 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:59.803 19:54:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:59.803 19:54:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.803 ************************************ 00:06:59.803 START TEST accel_compress_verify 00:06:59.803 ************************************ 00:06:59.803 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.803 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:59.803 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.803 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:59.804 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.804 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:59.804 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.804 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.804 19:54:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.804 19:54:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:59.804 19:54:47 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.804 19:54:47 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.804 19:54:47 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.804 19:54:47 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.804 19:54:47 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.804 
19:54:47 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:59.804 19:54:47 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:59.804 [2024-07-13 19:54:47.287080] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:06:59.804 [2024-07-13 19:54:47.287145] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073740 ] 00:06:59.804 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.804 [2024-07-13 19:54:47.349616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.066 [2024-07-13 19:54:47.444664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.066 [2024-07-13 19:54:47.506680] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.066 [2024-07-13 19:54:47.586140] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:00.066 00:07:00.066 Compression does not support the verify option, aborting. 00:07:00.066 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:00.066 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.066 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:00.066 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:00.066 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:00.066 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.066 00:07:00.066 real 0m0.393s 00:07:00.066 user 0m0.280s 00:07:00.066 sys 0m0.145s 00:07:00.066 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.066 19:54:47 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:00.066 ************************************ 00:07:00.066 END TEST accel_compress_verify 00:07:00.066 ************************************ 00:07:00.066 19:54:47 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:00.066 19:54:47 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:00.066 19:54:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.066 19:54:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.066 ************************************ 00:07:00.066 START TEST accel_wrong_workload 00:07:00.066 ************************************ 00:07:00.066 19:54:47 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:00.067 19:54:47 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:00.067 19:54:47 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:00.067 19:54:47 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:00.067 19:54:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.067 19:54:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:00.067 19:54:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.067 19:54:47 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
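Before the wrapped command runs, NOT first confirms its target is actually executable; that is what the repeated "type -t accel_perf" entries traced above are doing. A sketch of that valid_exec_arg check; the exact set of accepted kinds is an assumption:

    # proceed only if the argument resolves to something runnable
    valid_exec_arg() {
        local arg=$1
        case "$(type -t "$arg")" in
            function | builtin | file) return 0 ;;
            *) return 1 ;;
        esac
    }
    valid_exec_arg accel_perf && echo "accel_perf is runnable"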
00:07:00.067 19:54:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:00.067 19:54:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:00.067 19:54:47 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.067 19:54:47 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.067 19:54:47 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.067 19:54:47 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.067 19:54:47 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.067 19:54:47 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:00.067 19:54:47 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:00.326 Unsupported workload type: foobar 00:07:00.326 [2024-07-13 19:54:47.728833] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:00.326 accel_perf options: 00:07:00.326 [-h help message] 00:07:00.326 [-q queue depth per core] 00:07:00.326 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:00.326 [-T number of threads per core 00:07:00.326 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:00.326 [-t time in seconds] 00:07:00.326 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:00.326 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:00.326 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:00.326 [-l for compress/decompress workloads, name of uncompressed input file 00:07:00.326 [-S for crc32c workload, use this seed value (default 0) 00:07:00.326 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:00.326 [-f for fill workload, use this BYTE value (default 255) 00:07:00.326 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:00.326 [-y verify result if this switch is on] 00:07:00.326 [-a tasks to allocate per core (default: same value as -q)] 00:07:00.326 Can be used to spread operations across a wider range of memory. 
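The options dump above maps directly onto the invocations this section exercises. For reference, the accepted and rejected forms, with the binary path shortened to be relative to the spdk checkout:

    # accepted: crc32c with seed 32 plus verify, copy with verify
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    ./build/examples/accel_perf -t 1 -w copy -y
    # rejected above: unknown workload type
    ./build/examples/accel_perf -t 1 -w foobar
    # rejected below: xor needs a non-negative source buffer count
    ./build/examples/accel_perf -t 1 -w xor -y -x -1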
00:07:00.326 19:54:47 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:00.326 19:54:47 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.326 19:54:47 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:00.326 19:54:47 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.326 00:07:00.326 real 0m0.023s 00:07:00.326 user 0m0.011s 00:07:00.326 sys 0m0.012s 00:07:00.326 19:54:47 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.326 19:54:47 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:00.326 ************************************ 00:07:00.326 END TEST accel_wrong_workload 00:07:00.326 ************************************ 00:07:00.326 Error: writing output failed: Broken pipe 00:07:00.326 19:54:47 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:00.326 19:54:47 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:00.326 19:54:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.326 19:54:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.326 ************************************ 00:07:00.326 START TEST accel_negative_buffers 00:07:00.326 ************************************ 00:07:00.326 19:54:47 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:00.326 19:54:47 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:00.326 19:54:47 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:00.326 19:54:47 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:00.326 19:54:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.326 19:54:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:00.326 19:54:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.326 19:54:47 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:00.326 19:54:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:00.326 19:54:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:00.326 19:54:47 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.326 19:54:47 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.326 19:54:47 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.326 19:54:47 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.326 19:54:47 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.326 19:54:47 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:00.326 19:54:47 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:00.326 -x option must be non-negative. 
00:07:00.326 [2024-07-13 19:54:47.794190] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:00.326 accel_perf options: 00:07:00.326 [-h help message] 00:07:00.326 [-q queue depth per core] 00:07:00.326 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:00.326 [-T number of threads per core 00:07:00.326 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:00.326 [-t time in seconds] 00:07:00.326 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:00.326 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:00.326 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:00.326 [-l for compress/decompress workloads, name of uncompressed input file 00:07:00.326 [-S for crc32c workload, use this seed value (default 0) 00:07:00.326 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:00.326 [-f for fill workload, use this BYTE value (default 255) 00:07:00.326 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:00.326 [-y verify result if this switch is on] 00:07:00.326 [-a tasks to allocate per core (default: same value as -q)] 00:07:00.326 Can be used to spread operations across a wider range of memory. 00:07:00.326 19:54:47 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:00.326 19:54:47 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.326 19:54:47 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:00.326 19:54:47 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.326 00:07:00.326 real 0m0.022s 00:07:00.326 user 0m0.013s 00:07:00.326 sys 0m0.009s 00:07:00.326 19:54:47 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.326 19:54:47 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:00.326 ************************************ 00:07:00.326 END TEST accel_negative_buffers 00:07:00.326 ************************************ 00:07:00.326 Error: writing output failed: Broken pipe 00:07:00.326 19:54:47 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:00.326 19:54:47 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:00.326 19:54:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.326 19:54:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.326 ************************************ 00:07:00.326 START TEST accel_crc32c 00:07:00.326 ************************************ 00:07:00.326 19:54:47 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:00.326 19:54:47 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:00.326 19:54:47 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:00.326 19:54:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.326 19:54:47 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:00.326 19:54:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.326 19:54:47 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:07:00.326 19:54:47 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:00.326 19:54:47 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.326 19:54:47 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.326 19:54:47 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.326 19:54:47 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.326 19:54:47 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.326 19:54:47 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:00.326 19:54:47 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:00.326 [2024-07-13 19:54:47.860947] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:00.326 [2024-07-13 19:54:47.861010] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3073811 ] 00:07:00.326 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.326 [2024-07-13 19:54:47.924372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.584 [2024-07-13 19:54:48.018632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.584 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.585 19:54:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.958 19:54:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.958 19:54:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.958 19:54:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.958 19:54:49 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:01.959 19:54:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.959 00:07:01.959 real 0m1.414s 00:07:01.959 user 0m1.270s 00:07:01.959 sys 0m0.147s 00:07:01.959 19:54:49 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.959 19:54:49 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:01.959 ************************************ 00:07:01.959 END TEST accel_crc32c 00:07:01.959 ************************************ 00:07:01.959 19:54:49 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:01.959 19:54:49 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:01.959 19:54:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.959 19:54:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.959 ************************************ 00:07:01.959 START TEST accel_crc32c_C2 00:07:01.959 ************************************ 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:01.959 19:54:49 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:01.959 [2024-07-13 19:54:49.319856] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:01.959 [2024-07-13 19:54:49.319997] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074036 ] 00:07:01.959 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.959 [2024-07-13 19:54:49.384182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.959 [2024-07-13 19:54:49.478157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 
-- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.959 19:54:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 
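The wall of "val=" entries above is accel.sh parsing accel_perf's configuration banner: each "key: value" line is split with IFS=:, and only the workload type and module are kept for the "[[ -n crc32c ]]" and "[[ -n software ]]" assertions that follow. A sketch of that parser; the key names and the accel_perf_output variable are inferred from the trace, not taken from the script:

    # keep the opcode and module from accel_perf's "key: value" banner
    while IFS=: read -r var val; do
        case "$var" in
            *"Workload Type"*) accel_opc=${val# } ;;     # -> crc32c
            *Module*)          accel_module=${val# } ;;  # -> software
        esac
    done <<< "$accel_perf_output"   # captured accel_perf stdout (assumed name)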
00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.334 00:07:03.334 real 0m1.417s 00:07:03.334 user 0m1.263s 00:07:03.334 sys 0m0.156s 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.334 19:54:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:03.334 ************************************ 00:07:03.334 END TEST accel_crc32c_C2 00:07:03.334 ************************************ 00:07:03.334 19:54:50 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:03.334 19:54:50 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:03.334 19:54:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.334 19:54:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.334 ************************************ 00:07:03.334 START TEST accel_copy 00:07:03.334 ************************************ 00:07:03.334 19:54:50 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:03.334 19:54:50 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:03.334 19:54:50 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:03.334 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.334 19:54:50 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:03.334 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.334 
19:54:50 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:03.334 19:54:50 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:03.334 19:54:50 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.334 19:54:50 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.334 19:54:50 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.334 19:54:50 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.334 19:54:50 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.334 19:54:50 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:03.334 19:54:50 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:03.334 [2024-07-13 19:54:50.783502] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:03.334 [2024-07-13 19:54:50.783569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074236 ] 00:07:03.334 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.334 [2024-07-13 19:54:50.845050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.334 [2024-07-13 19:54:50.938542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.592 19:54:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.592 19:54:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.592 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.592 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.593 19:54:50 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.593 19:54:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.593 19:54:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@19 -- # read 
-r var val 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:04.526 19:54:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.526 00:07:04.526 real 0m1.397s 00:07:04.526 user 0m1.262s 00:07:04.526 sys 0m0.135s 00:07:04.526 19:54:52 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:04.526 19:54:52 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:04.526 ************************************ 00:07:04.526 END TEST accel_copy 00:07:04.526 ************************************ 00:07:04.526 19:54:52 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.526 19:54:52 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:04.526 19:54:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:04.526 19:54:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.785 ************************************ 00:07:04.785 START TEST accel_fill 00:07:04.785 ************************************ 00:07:04.785 19:54:52 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.785 19:54:52 
accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:04.785 [2024-07-13 19:54:52.220036] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:04.785 [2024-07-13 19:54:52.220098] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074403 ] 00:07:04.785 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.785 [2024-07-13 19:54:52.282999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.785 [2024-07-13 19:54:52.376143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
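Note: fill adds three knobs on top of the common flags: -f 128, -q 64 and -a 64, visible both on the run_test line and in the val=0x80 / val=64 assignments in the trace (reading those as fill byte 0x80, queue depth 64 and 64-byte alignment is our interpretation; the log itself only shows the raw values). A sketch of the equivalent direct run:

    # Hedged sketch: fill workload with the exact flags from the run_test line.
    # The meanings of -f/-q/-a given above are assumptions, not stated in the log.
    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y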
00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.785 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:05.043 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.043 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.043 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.043 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.044 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:05.044 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.044 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.044 19:54:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.044 19:54:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.044 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.044 19:54:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 19:54:53 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:05.977 19:54:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.977 00:07:05.977 real 0m1.412s 00:07:05.977 user 0m1.268s 00:07:05.977 sys 0m0.146s 00:07:05.977 19:54:53 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.977 19:54:53 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:05.977 ************************************ 00:07:05.977 END TEST accel_fill 00:07:05.977 ************************************ 00:07:05.977 19:54:53 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:05.977 19:54:53 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:05.977 19:54:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.977 19:54:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.235 ************************************ 00:07:06.235 START TEST accel_copy_crc32c 00:07:06.235 ************************************ 00:07:06.235 19:54:53 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:06.235 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:06.235 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:06.235 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.235 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:06.235 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.235 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:06.235 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:06.235 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.235 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.235 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.235 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.235 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.235 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:06.235 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
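Note: three workloads in, the pattern is stable: each test builds the accel config, starts one reactor core, streams the var/val xtrace, and prints its real/user/sys timing just before the END TEST banner. When skimming a saved copy of this console output, something like the following pulls out a per-test timing table (the filename is hypothetical, and the field positions assume one log entry per line in the saved file, laid out as shown here):

    # Hedged sketch: per-test wall-clock summary from a saved console log.
    # $3 on a " real " line is the duration; $5 on an END TEST line is the test name.
    awk '/ real /{t=$3} /END TEST/{print $5, t}' console.log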
00:07:06.235 [2024-07-13 19:54:53.678270] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:06.235 [2024-07-13 19:54:53.678345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074556 ] 00:07:06.235 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.235 [2024-07-13 19:54:53.739955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.235 [2024-07-13 19:54:53.831650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.236 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.493 19:54:53 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.493 19:54:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.425 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.425 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.425 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:07:07.425 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.425 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.425 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.425 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.425 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.425 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.425 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.425 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.425 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.426 00:07:07.426 real 0m1.388s 00:07:07.426 user 0m1.250s 00:07:07.426 sys 0m0.140s 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.426 19:54:55 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:07.426 ************************************ 00:07:07.426 END TEST accel_copy_crc32c 00:07:07.426 ************************************ 00:07:07.426 19:54:55 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:07.426 19:54:55 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:07.426 19:54:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.426 19:54:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.684 ************************************ 00:07:07.684 START TEST accel_copy_crc32c_C2 00:07:07.684 ************************************ 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:07.684 [2024-07-13 19:54:55.104315] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:07.684 [2024-07-13 19:54:55.104369] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074830 ] 00:07:07.684 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.684 [2024-07-13 19:54:55.164591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.684 [2024-07-13 19:54:55.257684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.684 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.685 19:54:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.059 00:07:09.059 real 0m1.406s 00:07:09.059 user 0m1.268s 00:07:09.059 sys 0m0.141s 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.059 19:54:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:09.059 
************************************ 00:07:09.059 END TEST accel_copy_crc32c_C2 00:07:09.059 ************************************ 00:07:09.059 19:54:56 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:09.059 19:54:56 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:09.059 19:54:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.059 19:54:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.059 ************************************ 00:07:09.059 START TEST accel_dualcast 00:07:09.059 ************************************ 00:07:09.059 19:54:56 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:09.059 19:54:56 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:09.059 19:54:56 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:09.059 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.059 19:54:56 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:09.059 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.059 19:54:56 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:09.059 19:54:56 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:09.059 19:54:56 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.059 19:54:56 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.059 19:54:56 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.059 19:54:56 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.059 19:54:56 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.059 19:54:56 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:09.059 19:54:56 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:09.059 [2024-07-13 19:54:56.553320] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
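Note: dualcast, starting here, is the first opcode in this block that writes two outputs: one 4096-byte source buffer is copied to two destinations in a single operation (that is SPDK's documented dualcast semantics, not something this trace spells out). The harness drives it with the same minimal flag set:

    # Hedged sketch: dualcast workload, flags verbatim from the run_test line.
    ./build/examples/accel_perf -t 1 -w dualcast -y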
00:07:09.059 [2024-07-13 19:54:56.553376] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074981 ] 00:07:09.059 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.059 [2024-07-13 19:54:56.613634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.059 [2024-07-13 19:54:56.707944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 
19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:09.318 19:54:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.694 19:54:57 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:10.694 19:54:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.694 00:07:10.694 real 0m1.388s 00:07:10.694 user 0m1.248s 00:07:10.694 sys 0m0.141s 00:07:10.694 19:54:57 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:10.694 19:54:57 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:10.694 ************************************ 00:07:10.694 END TEST accel_dualcast 00:07:10.694 ************************************ 00:07:10.694 19:54:57 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:10.694 19:54:57 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:10.694 19:54:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:10.694 19:54:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.694 ************************************ 00:07:10.694 START TEST accel_compare 00:07:10.694 ************************************ 00:07:10.694 19:54:57 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:10.694 19:54:57 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:10.694 19:54:57 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:10.694 19:54:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.694 19:54:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.694 19:54:57 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:10.694 19:54:57 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:10.694 19:54:57 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:10.694 19:54:57 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.694 19:54:57 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.694 19:54:57 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.694 19:54:57 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.694 19:54:57 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.694 19:54:57 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:10.694 19:54:57 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:10.694 [2024-07-13 19:54:57.988633] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
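Note: compare, starting here, pairs two 4096-byte buffers and reports any byte mismatch as a failure, so a passing run doubles as a data-integrity check on the software path (the mismatch behavior is how the compare opcode is generally understood; the trace only shows the opcode name and buffer size). Direct form, flags as logged:

    # Hedged sketch: compare workload, flags verbatim from the run_test line.
    ./build/examples/accel_perf -t 1 -w compare -y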
00:07:10.694 [2024-07-13 19:54:57.988694] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3075143 ] 00:07:10.694 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.694 [2024-07-13 19:54:58.051622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.694 [2024-07-13 19:54:58.143551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.694 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.694 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.694 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.694 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.694 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.694 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.694 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.694 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.694 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:10.694 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.694 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.694 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.694 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.694 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.694 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.695 19:54:58 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.695 19:54:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:12.070 19:54:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.070 00:07:12.070 real 0m1.410s 00:07:12.070 user 0m1.272s 00:07:12.070 sys 0m0.140s 00:07:12.070 19:54:59 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.070 19:54:59 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:12.070 ************************************ 00:07:12.070 END TEST accel_compare 00:07:12.070 ************************************ 00:07:12.070 19:54:59 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:12.070 19:54:59 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:12.070 19:54:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.070 19:54:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.070 ************************************ 00:07:12.070 START TEST accel_xor 00:07:12.070 ************************************ 00:07:12.070 19:54:59 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:12.070 [2024-07-13 19:54:59.443648] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
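The repeated "IFS=:", "read -r var val" and "case "$var" in" entries above are bash xtrace lines from a single parsing loop in accel.sh: accel_perf reports its configuration as colon-separated key/value pairs and the harness dispatches on each key, capturing values such as accel_opc=xor and accel_module=software. A minimal sketch of that loop's shape — the key patterns below are assumptions for illustration; only the captured values (xor, software, 32, 1, '1 seconds', Yes) actually appear in this log:

    # sketch only, not the verbatim accel.sh source
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=$val ;;    # hypothetical key; the trace shows accel_opc=xor
            *module*) accel_module=$val ;; # hypothetical key; the trace shows accel_module=software
        esac
    done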
00:07:12.070 [2024-07-13 19:54:59.443711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3075301 ] 00:07:12.070 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.070 [2024-07-13 19:54:59.505150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.070 [2024-07-13 19:54:59.596647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:12.070 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.071 19:54:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.445 
19:55:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.445 19:55:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.446 00:07:13.446 real 0m1.408s 00:07:13.446 user 0m1.268s 00:07:13.446 sys 0m0.142s 00:07:13.446 19:55:00 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:13.446 19:55:00 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:13.446 ************************************ 00:07:13.446 END TEST accel_xor 00:07:13.446 ************************************ 00:07:13.446 19:55:00 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:13.446 19:55:00 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:13.446 19:55:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:13.446 19:55:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.446 ************************************ 00:07:13.446 START TEST accel_xor 00:07:13.446 ************************************ 00:07:13.446 19:55:00 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:13.446 19:55:00 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:13.446 [2024-07-13 19:55:00.893182] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
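This second accel_xor pass drives the same binary with three XOR source buffers, matching the val=3 in the trace. Run by hand from this workspace it reduces to roughly the following; the -c /dev/fd/62 argument in the traced command supplies a JSON accel config over an inherited file descriptor and is omitted here, and the flag readings (-t 1 = '1 seconds', -y = verify, -x 3 = three sources) are inferred from the traced values rather than stated by the log:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w xor -y -x 3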
00:07:13.446 [2024-07-13 19:55:00.893252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3075569 ] 00:07:13.446 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.446 [2024-07-13 19:55:00.954596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.446 [2024-07-13 19:55:01.047459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:13.704 19:55:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.667 
19:55:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:14.667 19:55:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.667 00:07:14.667 real 0m1.409s 00:07:14.667 user 0m1.273s 00:07:14.667 sys 0m0.138s 00:07:14.667 19:55:02 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.667 19:55:02 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:14.667 ************************************ 00:07:14.667 END TEST accel_xor 00:07:14.667 ************************************ 00:07:14.667 19:55:02 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:14.667 19:55:02 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:14.667 19:55:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.667 19:55:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.936 ************************************ 00:07:14.936 START TEST accel_dif_verify 00:07:14.936 ************************************ 00:07:14.936 19:55:02 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:14.936 [2024-07-13 19:55:02.343597] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
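The accel_dif_verify setup that follows carries two '4096 bytes' values, one '512 bytes' and one '8 bytes'; read against a DIF workload these plausibly map to the data and metadata buffer sizes, the block size, and the per-block DIF field — an interpretation, since the log itself only records the raw values. Minus the config descriptor, the traced invocation is simply:

    ./build/examples/accel_perf -t 1 -w dif_verify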
00:07:14.936 [2024-07-13 19:55:02.343660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3075727 ] 00:07:14.936 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.936 [2024-07-13 19:55:02.405618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.936 [2024-07-13 19:55:02.500274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.936 
19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.936 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.937 19:55:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.307 19:55:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:16.307 
19:55:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.307 19:55:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.307 19:55:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.307 19:55:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:16.307 19:55:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:16.308 19:55:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.308 00:07:16.308 real 0m1.391s 00:07:16.308 user 0m1.250s 00:07:16.308 sys 0m0.145s 00:07:16.308 19:55:03 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.308 19:55:03 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:16.308 ************************************ 00:07:16.308 END TEST accel_dif_verify 00:07:16.308 ************************************ 00:07:16.308 19:55:03 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:16.308 19:55:03 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:16.308 19:55:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.308 19:55:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:16.308 ************************************ 00:07:16.308 START TEST accel_dif_generate 00:07:16.308 ************************************ 00:07:16.308 19:55:03 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:07:16.308 19:55:03 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:16.308 19:55:03 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:16.308 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.308 19:55:03 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 
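Every suite in this stretch follows the same harness wiring visible in the trace: run_test names and times a test, and accel_test hands its arguments to accel_perf with the config pipe attached (accel.sh@15 above shows "accel_perf -t 1 -w dif_generate"). A reduced sketch of that wiring, assuming accel_test is essentially a forwarding wrapper — the real accel.sh also builds the JSON config that accel_perf reads on /dev/fd/62:

    accel_test() {
        # forward the workload flags; the accel config arrives on fd 62
        "$rootdir/build/examples/accel_perf" -c /dev/fd/62 "$@"
    }
    run_test accel_dif_generate accel_test -t 1 -w dif_generate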
00:07:16.308 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.308 19:55:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:16.308 19:55:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:16.308 19:55:03 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.308 19:55:03 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.308 19:55:03 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.308 19:55:03 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.308 19:55:03 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.308 19:55:03 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:16.308 19:55:03 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:16.308 [2024-07-13 19:55:03.780620] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:16.308 [2024-07-13 19:55:03.780678] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3075893 ] 00:07:16.308 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.308 [2024-07-13 19:55:03.842485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.308 [2024-07-13 19:55:03.933829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.566 19:55:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:03 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:03 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:16.567 19:55:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:17.938 19:55:05 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.938 00:07:17.938 real 0m1.408s 00:07:17.938 user 0m1.270s 00:07:17.938 sys 
0m0.142s 00:07:17.938 19:55:05 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:17.938 19:55:05 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:17.938 ************************************ 00:07:17.938 END TEST accel_dif_generate 00:07:17.938 ************************************ 00:07:17.938 19:55:05 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:17.938 19:55:05 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:17.938 19:55:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:17.938 19:55:05 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.938 ************************************ 00:07:17.938 START TEST accel_dif_generate_copy 00:07:17.938 ************************************ 00:07:17.938 19:55:05 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:07:17.938 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:17.938 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:17.938 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.938 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:17.938 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.938 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:17.938 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:17.938 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.938 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.938 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.938 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.938 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.938 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:17.939 [2024-07-13 19:55:05.231938] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
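The real/user/sys triplets printed just before each END TEST banner are bash's built-in time report, consistent with run_test timing the whole wrapped command — approximately as below, though the exact plumbing inside run_test is an assumption here:

    # inside run_test, approximately:
    time accel_test -t 1 -w dif_generate_copy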
00:07:17.939 [2024-07-13 19:55:05.232000] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076115 ] 00:07:17.939 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.939 [2024-07-13 19:55:05.292355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.939 [2024-07-13 19:55:05.385540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.939 19:55:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
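The accel_comp suite that starts below is the first in this run to take an input file: its traced command passes -l with a fixed corpus from the repository instead of synthetic buffers. By analogy with the earlier invocations it reduces to:

    ./build/examples/accel_perf -t 1 -w compress \
        -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib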
00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.311 00:07:19.311 real 0m1.407s 00:07:19.311 user 0m1.266s 00:07:19.311 sys 0m0.142s 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.311 19:55:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:19.311 ************************************ 00:07:19.311 END TEST accel_dif_generate_copy 00:07:19.311 ************************************ 00:07:19.311 19:55:06 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:19.311 19:55:06 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.311 19:55:06 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:19.311 19:55:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:19.311 19:55:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:19.311 ************************************ 00:07:19.311 START TEST accel_comp 00:07:19.311 ************************************ 00:07:19.311 19:55:06 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:19.311 [2024-07-13 19:55:06.686585] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:19.311 [2024-07-13 19:55:06.686648] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076318 ] 00:07:19.311 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.311 [2024-07-13 19:55:06.748095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.311 [2024-07-13 19:55:06.840838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.311 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.312 
19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.312 19:55:06 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:19.312 19:55:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:20.685 19:55:08 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.685 00:07:20.685 real 0m1.412s 00:07:20.685 user 0m1.268s 00:07:20.685 sys 0m0.148s 00:07:20.685 19:55:08 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.685 19:55:08 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:20.685 ************************************ 00:07:20.685 END TEST accel_comp 00:07:20.685 ************************************ 00:07:20.685 19:55:08 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:20.685 19:55:08 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:20.685 19:55:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.685 19:55:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.685 ************************************ 00:07:20.685 START TEST accel_decomp 00:07:20.685 ************************************ 00:07:20.685 19:55:08 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:20.685 19:55:08 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:20.685 19:55:08 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:20.685 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.685 19:55:08 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:20.685 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.685 19:55:08 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:20.685 19:55:08 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:20.685 19:55:08 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.685 19:55:08 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.685 19:55:08 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.685 19:55:08 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.685 19:55:08 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.685 19:55:08 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:20.685 19:55:08 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:20.685 [2024-07-13 19:55:08.148677] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:20.685 [2024-07-13 19:55:08.148742] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076471 ] 00:07:20.685 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.686 [2024-07-13 19:55:08.212403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.686 [2024-07-13 19:55:08.304157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.944 19:55:08 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:20.944 19:55:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:21.880 19:55:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.880 00:07:21.880 real 0m1.399s 00:07:21.880 user 0m1.262s 00:07:21.880 sys 0m0.140s 00:07:21.880 19:55:09 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.880 19:55:09 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:21.880 ************************************ 00:07:21.880 END TEST accel_decomp 00:07:21.880 ************************************ 00:07:22.140 
19:55:09 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:22.140 19:55:09 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:22.140 19:55:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.140 19:55:09 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.140 ************************************ 00:07:22.140 START TEST accel_decmop_full 00:07:22.140 ************************************ 00:07:22.140 19:55:09 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:22.140 19:55:09 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:22.140 19:55:09 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:22.140 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.140 19:55:09 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:22.140 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.140 19:55:09 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:22.140 19:55:09 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:22.140 19:55:09 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.140 19:55:09 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.140 19:55:09 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.140 19:55:09 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.140 19:55:09 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.140 19:55:09 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:22.140 19:55:09 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:22.140 [2024-07-13 19:55:09.593738] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
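[editor's note] The prologue above follows the same plumbing as every test in this file: accel_test (@15) forwards its arguments to an accel_perf wrapper, and the wrapper's traced command line (@12) passes "-c /dev/fd/62", the characteristic footprint of bash process substitution feeding the JSON emitted by build_accel_config (@31 through @41). A hedged sketch of what those traced lines most plausibly correspond to; the function bodies and the JSON shape are reconstructions for illustration, not quotes from accel.sh:

# SPDK_EXAMPLE_DIR stands in for the long workspace path seen in the trace.
accel_perf() {
    # <(...) expands to /dev/fd/NN, which is why @12 shows "-c /dev/fd/62"
    "$SPDK_EXAMPLE_DIR/accel_perf" -c <(build_accel_config) "$@"
}
build_accel_config() {
    accel_json_cfg=()    # @31; the @32-@36 guards would append engine-specific
                         # JSON snippets, but every guard is false in these runs
    local IFS=,          # @40: join the array elements with commas
    # @41 feeds the assembled document through jq -r .; the exact JSON shape is assumed
    jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
}

Each run_test entry in this log is therefore a single accel_perf invocation whose workload, duration, and input file are selected entirely by the forwarded flags.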
00:07:22.140 [2024-07-13 19:55:09.593796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076638 ] 00:07:22.140 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.140 [2024-07-13 19:55:09.656269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.140 [2024-07-13 19:55:09.747151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:22.400 19:55:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:23.332 19:55:10 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.332 00:07:23.332 real 0m1.410s 00:07:23.332 user 0m1.273s 00:07:23.332 sys 0m0.139s 00:07:23.332 19:55:10 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.332 19:55:10 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:23.332 ************************************ 00:07:23.332 END TEST accel_decmop_full 00:07:23.332 ************************************ 00:07:23.591 19:55:11 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:23.591 19:55:11 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:23.591 19:55:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.591 19:55:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.591 ************************************ 00:07:23.591 START TEST accel_decomp_mcore 00:07:23.591 ************************************ 00:07:23.591 19:55:11 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:23.591 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:23.591 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:23.591 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.591 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:23.591 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.591 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:23.591 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:23.591 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.591 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.591 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.591 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.591 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.591 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:23.591 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:23.591 [2024-07-13 19:55:11.053509] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:23.591 [2024-07-13 19:55:11.053571] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076904 ] 00:07:23.591 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.591 [2024-07-13 19:55:11.117270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.591 [2024-07-13 19:55:11.211917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.591 [2024-07-13 19:55:11.211986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.591 [2024-07-13 19:55:11.212086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.591 [2024-07-13 19:55:11.212088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.849 19:55:11 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.849 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.850 19:55:11 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:23.850 19:55:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.226 00:07:25.226 real 0m1.419s 00:07:25.226 user 0m4.721s 00:07:25.226 sys 0m0.155s 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.226 19:55:12 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:25.226 ************************************ 00:07:25.226 END TEST accel_decomp_mcore 00:07:25.226 ************************************ 00:07:25.226 19:55:12 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:25.226 19:55:12 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:25.226 19:55:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.226 19:55:12 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.226 ************************************ 00:07:25.226 START TEST accel_decomp_full_mcore 00:07:25.226 ************************************ 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:25.226 [2024-07-13 19:55:12.518450] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:25.226 [2024-07-13 19:55:12.518503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077066 ] 00:07:25.226 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.226 [2024-07-13 19:55:12.578669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.226 [2024-07-13 19:55:12.675467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.226 [2024-07-13 19:55:12.675533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.226 [2024-07-13 19:55:12.675624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.226 [2024-07-13 19:55:12.675621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.226 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:25.227 19:55:12 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:25.227 19:55:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.599 00:07:26.599 real 0m1.425s 00:07:26.599 user 0m4.766s 00:07:26.599 sys 0m0.153s 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.599 19:55:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:26.599 ************************************ 00:07:26.599 END TEST accel_decomp_full_mcore 00:07:26.599 ************************************ 00:07:26.599 19:55:13 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:26.599 19:55:13 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:26.599 19:55:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.599 19:55:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.599 ************************************ 00:07:26.599 START TEST accel_decomp_mthread 00:07:26.599 ************************************ 00:07:26.599 19:55:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:26.599 19:55:13 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:26.599 19:55:13 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:26.599 19:55:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.599 19:55:13 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:26.599 19:55:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.599 19:55:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:26.599 19:55:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:26.599 19:55:13 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.599 19:55:13 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.599 19:55:13 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.599 19:55:13 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.599 19:55:13 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.599 19:55:13 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:26.599 19:55:13 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r 
. 00:07:26.599 [2024-07-13 19:55:13.992251] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:26.599 [2024-07-13 19:55:13.992301] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077228 ] 00:07:26.599 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.599 [2024-07-13 19:55:14.052044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.599 [2024-07-13 19:55:14.142603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.599 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.599 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.599 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.599 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.599 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.599 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.599 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.599 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.599 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.599 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.599 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.599 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.599 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:26.599 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.599 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.600 19:55:14 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:26.600 19:55:14 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.974 00:07:27.974 real 0m1.405s 00:07:27.974 user 0m1.263s 00:07:27.974 sys 0m0.145s 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.974 19:55:15 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:27.974 ************************************ 00:07:27.974 END TEST accel_decomp_mthread 00:07:27.974 ************************************ 00:07:27.974 19:55:15 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.974 19:55:15 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:27.974 19:55:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.974 19:55:15 
accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.974 ************************************ 00:07:27.974 START TEST accel_decomp_full_mthread 00:07:27.974 ************************************ 00:07:27.974 19:55:15 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.974 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:27.974 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:27.974 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.974 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.974 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.974 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:27.974 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:27.974 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.974 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.974 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.974 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.974 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.974 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:27.974 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:27.974 [2024-07-13 19:55:15.441637] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
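Every variant in this suite funnels into the same accel.sh@12 invocation visible above: accel_perf reads its JSON accel config from an inherited descriptor (-c /dev/fd/62) while the workload itself comes from flags. A hedged stand-alone equivalent, using process substitution so bash supplies the /dev/fd entry; paths are relative to an SPDK checkout and the empty JSON config is only a placeholder for whatever build_accel_config assembled:

  # decompress the bib test file for 1 second; the "full" variants add -o 0
  # and their traces report '111250 bytes' buffers, versus the '4096 bytes'
  # chunks seen in the plain accel_decomp_mthread run
  ./build/examples/accel_perf -c <(echo '{}') \
      -t 1 -w decompress -l ./test/accel/bib -y -o 0 -T 2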
00:07:27.974 [2024-07-13 19:55:15.441690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077417 ] 00:07:27.974 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.974 [2024-07-13 19:55:15.501719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.974 [2024-07-13 19:55:15.594462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:28.233 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:28.234 19:55:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.634 00:07:29.634 real 0m1.439s 00:07:29.634 user 0m1.294s 00:07:29.634 sys 0m0.147s 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.634 19:55:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:29.634 ************************************ 00:07:29.634 END TEST accel_decomp_full_mthread 00:07:29.634 
************************************ 00:07:29.634 19:55:16 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:29.634 19:55:16 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:29.634 19:55:16 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:29.634 19:55:16 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.634 19:55:16 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:29.634 19:55:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.634 19:55:16 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.634 19:55:16 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.634 19:55:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.634 19:55:16 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.634 19:55:16 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.634 19:55:16 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:29.634 19:55:16 accel -- accel/accel.sh@41 -- # jq -r . 00:07:29.634 ************************************ 00:07:29.634 START TEST accel_dif_functional_tests 00:07:29.634 ************************************ 00:07:29.634 19:55:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:29.634 [2024-07-13 19:55:16.945922] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:29.634 [2024-07-13 19:55:16.945985] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077656 ] 00:07:29.634 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.634 [2024-07-13 19:55:17.005113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.634 [2024-07-13 19:55:17.100506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.634 [2024-07-13 19:55:17.100574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.634 [2024-07-13 19:55:17.100576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.634 00:07:29.634 00:07:29.634 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.634 http://cunit.sourceforge.net/ 00:07:29.634 00:07:29.634 00:07:29.634 Suite: accel_dif 00:07:29.634 Test: verify: DIF generated, GUARD check ...passed 00:07:29.634 Test: verify: DIF generated, APPTAG check ...passed 00:07:29.634 Test: verify: DIF generated, REFTAG check ...passed 00:07:29.634 Test: verify: DIF not generated, GUARD check ...[2024-07-13 19:55:17.193471] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:29.634 passed 00:07:29.634 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 19:55:17.193543] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:29.634 passed 00:07:29.634 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 19:55:17.193573] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:29.634 passed 00:07:29.634 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:29.634 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 19:55:17.193635] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:29.634 passed 00:07:29.634 
Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:29.634 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:29.634 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:29.634 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 19:55:17.193761] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:29.634 passed 00:07:29.634 Test: verify copy: DIF generated, GUARD check ...passed 00:07:29.634 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:29.634 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:29.634 Test: verify copy: DIF not generated, GUARD check ...[2024-07-13 19:55:17.193935] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:29.634 passed 00:07:29.634 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 19:55:17.193973] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:29.634 passed 00:07:29.634 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-13 19:55:17.194006] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:29.634 passed 00:07:29.634 Test: generate copy: DIF generated, GUARD check ...passed 00:07:29.634 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:29.634 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:29.634 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:29.634 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:29.634 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:29.634 Test: generate copy: iovecs-len validate ...[2024-07-13 19:55:17.194235] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:29.634 passed 00:07:29.634 Test: generate copy: buffer alignment validate ...passed 00:07:29.634 00:07:29.634 Run Summary: Type Total Ran Passed Failed Inactive 00:07:29.634 suites 1 1 n/a 0 0 00:07:29.634 tests 26 26 26 0 0 00:07:29.634 asserts 115 115 115 0 n/a 00:07:29.634 00:07:29.634 Elapsed time = 0.002 seconds 00:07:29.891 00:07:29.891 real 0m0.497s 00:07:29.891 user 0m0.776s 00:07:29.891 sys 0m0.178s 00:07:29.891 19:55:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.891 19:55:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:29.891 ************************************ 00:07:29.891 END TEST accel_dif_functional_tests 00:07:29.891 ************************************ 00:07:29.891 00:07:29.891 real 0m31.677s 00:07:29.891 user 0m35.113s 00:07:29.891 sys 0m4.564s 00:07:29.891 19:55:17 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.891 19:55:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.891 ************************************ 00:07:29.891 END TEST accel 00:07:29.891 ************************************ 00:07:29.891 19:55:17 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:29.891 19:55:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:29.891 19:55:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.891 19:55:17 -- common/autotest_common.sh@10 -- # set +x 00:07:29.891 ************************************ 00:07:29.891 START TEST accel_rpc 00:07:29.891 ************************************ 00:07:29.891 19:55:17 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:29.891 * Looking for test storage... 00:07:29.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:29.891 19:55:17 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:29.891 19:55:17 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3077732 00:07:29.891 19:55:17 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:29.891 19:55:17 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3077732 00:07:29.891 19:55:17 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3077732 ']' 00:07:29.891 19:55:17 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.891 19:55:17 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:29.891 19:55:17 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.891 19:55:17 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:29.891 19:55:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.151 [2024-07-13 19:55:17.560342] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
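The accel_rpc suite starting here drives a live target instead of the accel_perf example binary: spdk_tgt comes up paused under --wait-for-rpc, its pid is guarded by an ERR trap, and the harness polls until the RPC socket answers. A hedged sketch of that bring-up pattern; killprocess and waitforlisten are autotest_common.sh helpers, so plain kill and a comment stand in for them here:

  ./build/bin/spdk_tgt --wait-for-rpc &   # framework stays paused until framework_start_init
  spdk_tgt_pid=$!
  trap 'kill -9 $spdk_tgt_pid; exit 1' ERR
  # waitforlisten "$spdk_tgt_pid" then retries (max_retries=100 above)
  # until /var/tmp/spdk.sock accepts RPC connections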
00:07:30.151 [2024-07-13 19:55:17.560416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077732 ] 00:07:30.151 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.151 [2024-07-13 19:55:17.621044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.151 [2024-07-13 19:55:17.713974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.151 19:55:17 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:30.151 19:55:17 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:30.151 19:55:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:30.151 19:55:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:30.151 19:55:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:30.151 19:55:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:30.151 19:55:17 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:30.151 19:55:17 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:30.151 19:55:17 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.151 19:55:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.151 ************************************ 00:07:30.151 START TEST accel_assign_opcode 00:07:30.151 ************************************ 00:07:30.151 19:55:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:30.151 19:55:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:30.151 19:55:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.151 19:55:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:30.151 [2024-07-13 19:55:17.782606] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:30.151 19:55:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.151 19:55:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:30.151 19:55:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.151 19:55:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:30.151 [2024-07-13 19:55:17.790611] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:30.151 19:55:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.151 19:55:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:30.151 19:55:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.151 19:55:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:30.452 19:55:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.452 19:55:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:30.452 19:55:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.452 19:55:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:30.452 19:55:18 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:30.452 19:55:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:30.452 19:55:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.452 software 00:07:30.452 00:07:30.452 real 0m0.296s 00:07:30.452 user 0m0.043s 00:07:30.452 sys 0m0.003s 00:07:30.452 19:55:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.452 19:55:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:30.452 ************************************ 00:07:30.452 END TEST accel_assign_opcode 00:07:30.452 ************************************ 00:07:30.710 19:55:18 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3077732 00:07:30.710 19:55:18 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3077732 ']' 00:07:30.710 19:55:18 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3077732 00:07:30.710 19:55:18 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:30.710 19:55:18 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:30.710 19:55:18 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3077732 00:07:30.710 19:55:18 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:30.710 19:55:18 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:30.710 19:55:18 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3077732' 00:07:30.710 killing process with pid 3077732 00:07:30.710 19:55:18 accel_rpc -- common/autotest_common.sh@965 -- # kill 3077732 00:07:30.710 19:55:18 accel_rpc -- common/autotest_common.sh@970 -- # wait 3077732 00:07:30.968 00:07:30.968 real 0m1.073s 00:07:30.968 user 0m1.009s 00:07:30.968 sys 0m0.409s 00:07:30.968 19:55:18 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.968 19:55:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.968 ************************************ 00:07:30.968 END TEST accel_rpc 00:07:30.968 ************************************ 00:07:30.968 19:55:18 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:30.968 19:55:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:30.968 19:55:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.968 19:55:18 -- common/autotest_common.sh@10 -- # set +x 00:07:30.968 ************************************ 00:07:30.968 START TEST app_cmdline 00:07:30.968 ************************************ 00:07:30.968 19:55:18 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:31.226 * Looking for test storage... 
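Stripped of its wrappers, the accel_assign_opcode run above is a three-call RPC round trip against that paused target: pin the copy opcode to the software module, finish initialization, and read the assignment back. Replayed with rpc.py directly (the traced rpc_cmd is a thin wrapper around the same script):

  scripts/rpc.py accel_assign_opc -o copy -m software     # must land before init
  scripts/rpc.py framework_start_init
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # expect: software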
00:07:31.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:31.226 19:55:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:31.226 19:55:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3077940 00:07:31.226 19:55:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:31.226 19:55:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3077940 00:07:31.226 19:55:18 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3077940 ']' 00:07:31.226 19:55:18 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.226 19:55:18 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:31.226 19:55:18 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.226 19:55:18 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:31.226 19:55:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:31.226 [2024-07-13 19:55:18.690463] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:31.226 [2024-07-13 19:55:18.690539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077940 ] 00:07:31.226 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.226 [2024-07-13 19:55:18.747047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.226 [2024-07-13 19:55:18.832485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.483 19:55:19 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:31.483 19:55:19 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:07:31.483 19:55:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:31.741 { 00:07:31.741 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:07:31.741 "fields": { 00:07:31.741 "major": 24, 00:07:31.741 "minor": 5, 00:07:31.741 "patch": 1, 00:07:31.741 "suffix": "-pre", 00:07:31.741 "commit": "5fa2f5086" 00:07:31.741 } 00:07:31.741 } 00:07:31.741 19:55:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:31.741 19:55:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:31.741 19:55:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:31.741 19:55:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:31.741 19:55:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:31.741 19:55:19 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.741 19:55:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:31.741 19:55:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:31.741 19:55:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:31.741 19:55:19 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.741 19:55:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:31.741 19:55:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:31.741 19:55:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:31.741 19:55:19 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:31.741 19:55:19 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:31.741 19:55:19 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.741 19:55:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.741 19:55:19 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.741 19:55:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.741 19:55:19 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.741 19:55:19 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.741 19:55:19 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:31.741 19:55:19 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:31.741 19:55:19 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:31.999 request: 00:07:31.999 { 00:07:31.999 "method": "env_dpdk_get_mem_stats", 00:07:31.999 "req_id": 1 00:07:31.999 } 00:07:31.999 Got JSON-RPC error response 00:07:31.999 response: 00:07:31.999 { 00:07:31.999 "code": -32601, 00:07:31.999 "message": "Method not found" 00:07:31.999 } 00:07:31.999 19:55:19 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:31.999 19:55:19 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:31.999 19:55:19 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:31.999 19:55:19 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:31.999 19:55:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3077940 00:07:31.999 19:55:19 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3077940 ']' 00:07:31.999 19:55:19 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3077940 00:07:31.999 19:55:19 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:07:31.999 19:55:19 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:31.999 19:55:19 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3077940 00:07:31.999 19:55:19 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:31.999 19:55:19 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:31.999 19:55:19 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3077940' 00:07:31.999 killing process with pid 3077940 00:07:31.999 19:55:19 app_cmdline -- common/autotest_common.sh@965 -- # kill 3077940 00:07:31.999 19:55:19 app_cmdline -- common/autotest_common.sh@970 -- # wait 3077940 00:07:32.565 00:07:32.565 real 0m1.441s 00:07:32.565 user 0m1.773s 00:07:32.565 sys 0m0.446s 00:07:32.565 19:55:20 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.565 19:55:20 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:32.565 ************************************ 00:07:32.565 END TEST app_cmdline 00:07:32.565 ************************************ 00:07:32.565 19:55:20 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:32.565 19:55:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:32.565 19:55:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.565 19:55:20 -- common/autotest_common.sh@10 -- # set +x 00:07:32.565 ************************************ 00:07:32.565 START TEST version 00:07:32.565 ************************************ 00:07:32.565 19:55:20 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:32.565 * Looking for test storage... 00:07:32.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:32.565 19:55:20 version -- app/version.sh@17 -- # get_header_version major 00:07:32.565 19:55:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:32.565 19:55:20 version -- app/version.sh@14 -- # cut -f2 00:07:32.565 19:55:20 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.565 19:55:20 version -- app/version.sh@17 -- # major=24 00:07:32.565 19:55:20 version -- app/version.sh@18 -- # get_header_version minor 00:07:32.565 19:55:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:32.565 19:55:20 version -- app/version.sh@14 -- # cut -f2 00:07:32.565 19:55:20 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.565 19:55:20 version -- app/version.sh@18 -- # minor=5 00:07:32.565 19:55:20 version -- app/version.sh@19 -- # get_header_version patch 00:07:32.565 19:55:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:32.565 19:55:20 version -- app/version.sh@14 -- # cut -f2 00:07:32.565 19:55:20 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.565 19:55:20 version -- app/version.sh@19 -- # patch=1 00:07:32.566 19:55:20 version -- app/version.sh@20 -- # get_header_version suffix 00:07:32.566 19:55:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:32.566 19:55:20 version -- app/version.sh@14 -- # cut -f2 00:07:32.566 19:55:20 version -- app/version.sh@14 -- # tr -d '"' 00:07:32.566 19:55:20 version -- app/version.sh@20 -- # suffix=-pre 00:07:32.566 19:55:20 version -- app/version.sh@22 -- # version=24.5 00:07:32.566 19:55:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:32.566 19:55:20 version -- app/version.sh@25 -- # version=24.5.1 00:07:32.566 19:55:20 version -- app/version.sh@28 -- # version=24.5.1rc0 00:07:32.566 19:55:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:32.566 19:55:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
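Each version component printed above comes from version.sh's get_header_version, a grep/cut/tr pipeline over include/spdk/version.h whose result is then compared against what python reports as spdk.__version__. The same pipeline, self-contained (run from the root of an SPDK checkout):

  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"')
  echo "${major}.${minor}.${patch}"   # 24.5.1 for this build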
00:07:32.566 19:55:20 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:07:32.566 19:55:20 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:07:32.566 00:07:32.566 real 0m0.107s 00:07:32.566 user 0m0.055s 00:07:32.566 sys 0m0.074s 00:07:32.566 19:55:20 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.566 19:55:20 version -- common/autotest_common.sh@10 -- # set +x 00:07:32.566 ************************************ 00:07:32.566 END TEST version 00:07:32.566 ************************************ 00:07:32.566 19:55:20 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:32.566 19:55:20 -- spdk/autotest.sh@198 -- # uname -s 00:07:32.566 19:55:20 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:32.566 19:55:20 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:32.566 19:55:20 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:32.566 19:55:20 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:32.566 19:55:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:32.566 19:55:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:32.566 19:55:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:32.566 19:55:20 -- common/autotest_common.sh@10 -- # set +x 00:07:32.825 19:55:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:32.825 19:55:20 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:32.825 19:55:20 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:32.825 19:55:20 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:32.825 19:55:20 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:32.825 19:55:20 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:32.825 19:55:20 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:32.825 19:55:20 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:32.825 19:55:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.825 19:55:20 -- common/autotest_common.sh@10 -- # set +x 00:07:32.825 ************************************ 00:07:32.825 START TEST nvmf_tcp 00:07:32.825 ************************************ 00:07:32.825 19:55:20 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:32.825 * Looking for test storage... 00:07:32.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:32.825 19:55:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:32.825 19:55:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.826 19:55:20 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.826 19:55:20 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.826 19:55:20 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.826 19:55:20 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.826 19:55:20 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.826 19:55:20 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.826 19:55:20 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:32.826 19:55:20 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:32.826 19:55:20 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:32.826 19:55:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:32.826 19:55:20 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:32.826 19:55:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:32.826 19:55:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.826 19:55:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:32.826 ************************************ 00:07:32.826 START TEST nvmf_example 00:07:32.826 ************************************ 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:32.826 * Looking for test storage... 
00:07:32.826 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:32.826 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:32.827 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:32.827 19:55:20 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:32.827 19:55:20 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:34.727 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:34.727 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:34.727 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:34.728 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:34.728 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:34.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:34.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:07:34.728 00:07:34.728 --- 10.0.0.2 ping statistics --- 00:07:34.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.728 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:07:34.728 00:07:34.728 --- 10.0.0.1 ping statistics --- 00:07:34.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.728 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3079954 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3079954 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3079954 ']' 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
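The trace above shows nvmf_tcp_init turning the host's two e810 ports into a point-to-point test bed: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and becomes the target-side interface at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1/24, an iptables rule admits TCP port 4420 on the initiator interface, and both directions are verified with ping before the example target is launched inside the namespace. A minimal stand-alone sketch of the same topology, assuming a veth pair in place of the physical cvl_* ports (the names nvmf_tgt_ns, veth_ini, and veth_tgt are illustrative, not from the test scripts):

  #!/usr/bin/env bash
  # Rebuild the namespace topology from the trace with a veth pair instead
  # of the CI host's two physical e810 ports (cvl_0_0 / cvl_0_1). Needs root.
  set -euo pipefail
  NS=nvmf_tgt_ns
  ip netns add "$NS"
  ip link add veth_ini type veth peer name veth_tgt
  ip link set veth_tgt netns "$NS"                          # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev veth_ini                      # initiator address (root namespace)
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt  # target address (inside namespace)
  ip link set veth_ini up
  ip netns exec "$NS" ip link set veth_tgt up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                        # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator

A target started under ip netns exec "$NS" is then reachable from the root namespace at 10.0.0.2:4420, which is the address every subsequent step of this test uses.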
00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:34.728 19:55:22 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:34.986 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:35.917 19:55:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:35.917 EAL: No free 2048 kB hugepages reported on node 1 
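The configuration phase above is driven through rpc_cmd, the autotest wrapper around SPDK's scripts/rpc.py talking to the target on /var/tmp/spdk.sock. Replayed as direct rpc.py calls, the same bring-up looks like the sketch below (it assumes you are inside an SPDK checkout with an nvmf-capable target already running; the commands and arguments are exactly those recorded in the trace):

  #!/usr/bin/env bash
  # Replay of the RPC sequence from the trace, using scripts/rpc.py directly
  # instead of the autotest rpc_cmd wrapper.
  set -euo pipefail
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192    # TCP transport, flags as in the trace
  $RPC bdev_malloc_create 64 512                  # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener up, the spdk_nvme_perf invocation just launched (-q 64 -o 4096 -w randrw -M 30 -t 10) drives 64 outstanding 4 KiB I/Os at a 30% read mix for 10 seconds against that subsystem; its summary table follows.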
00:07:48.114 Initializing NVMe Controllers 00:07:48.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:48.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:48.114 Initialization complete. Launching workers. 00:07:48.114 ======================================================== 00:07:48.114 Latency(us) 00:07:48.114 Device Information : IOPS MiB/s Average min max 00:07:48.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14882.47 58.13 4300.18 849.76 16401.03 00:07:48.114 ======================================================== 00:07:48.114 Total : 14882.47 58.13 4300.18 849.76 16401.03 00:07:48.114 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:48.114 rmmod nvme_tcp 00:07:48.114 rmmod nvme_fabrics 00:07:48.114 rmmod nvme_keyring 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3079954 ']' 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3079954 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3079954 ']' 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3079954 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3079954 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3079954' 00:07:48.114 killing process with pid 3079954 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 3079954 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 3079954 00:07:48.114 nvmf threads initialize successfully 00:07:48.114 bdev subsystem init successfully 00:07:48.114 created a nvmf target service 00:07:48.114 create targets's poll groups done 00:07:48.114 all subsystems of target started 00:07:48.114 nvmf target is running 00:07:48.114 all subsystems of target stopped 00:07:48.114 destroy targets's poll groups done 00:07:48.114 destroyed the nvmf target service 00:07:48.114 bdev subsystem finish successfully 00:07:48.114 nvmf threads destroy successfully 00:07:48.114 19:55:33 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.114 19:55:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.372 19:55:35 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:48.372 19:55:35 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:48.372 19:55:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.372 19:55:35 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:48.372 00:07:48.372 real 0m15.665s 00:07:48.372 user 0m44.910s 00:07:48.372 sys 0m3.099s 00:07:48.372 19:55:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:48.372 19:55:36 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:48.372 ************************************ 00:07:48.372 END TEST nvmf_example 00:07:48.372 ************************************ 00:07:48.372 19:55:36 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:48.372 19:55:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:48.372 19:55:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.372 19:55:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:48.632 ************************************ 00:07:48.632 START TEST nvmf_filesystem 00:07:48.632 ************************************ 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:48.632 * Looking for test storage... 
00:07:48.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:48.632 19:55:36 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:48.632 19:55:36 
nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:48.632 #define SPDK_CONFIG_H 00:07:48.632 #define SPDK_CONFIG_APPS 1 00:07:48.632 #define SPDK_CONFIG_ARCH native 00:07:48.632 #undef SPDK_CONFIG_ASAN 00:07:48.632 #undef SPDK_CONFIG_AVAHI 00:07:48.632 #undef SPDK_CONFIG_CET 00:07:48.632 #define SPDK_CONFIG_COVERAGE 1 00:07:48.632 #define SPDK_CONFIG_CROSS_PREFIX 00:07:48.632 #undef SPDK_CONFIG_CRYPTO 00:07:48.632 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:48.632 #undef SPDK_CONFIG_CUSTOMOCF 00:07:48.632 #undef SPDK_CONFIG_DAOS 00:07:48.632 #define SPDK_CONFIG_DAOS_DIR 00:07:48.632 #define SPDK_CONFIG_DEBUG 1 00:07:48.632 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:48.632 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:48.632 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:48.632 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:48.632 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:48.632 #undef SPDK_CONFIG_DPDK_UADK 00:07:48.632 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:48.632 #define SPDK_CONFIG_EXAMPLES 1 00:07:48.632 #undef SPDK_CONFIG_FC 00:07:48.632 #define SPDK_CONFIG_FC_PATH 00:07:48.632 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:48.632 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:48.632 #undef SPDK_CONFIG_FUSE 00:07:48.632 #undef SPDK_CONFIG_FUZZER 00:07:48.632 #define SPDK_CONFIG_FUZZER_LIB 00:07:48.632 #undef SPDK_CONFIG_GOLANG 00:07:48.632 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:48.632 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:48.632 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:48.632 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:48.632 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:48.632 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:48.632 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:48.632 #define SPDK_CONFIG_IDXD 1 00:07:48.632 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:48.632 #undef SPDK_CONFIG_IPSEC_MB 00:07:48.632 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:48.632 #define SPDK_CONFIG_ISAL 1 00:07:48.632 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:48.632 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:48.632 #define SPDK_CONFIG_LIBDIR 00:07:48.632 #undef SPDK_CONFIG_LTO 00:07:48.632 #define SPDK_CONFIG_MAX_LCORES 
00:07:48.632 #define SPDK_CONFIG_NVME_CUSE 1 00:07:48.632 #undef SPDK_CONFIG_OCF 00:07:48.632 #define SPDK_CONFIG_OCF_PATH 00:07:48.632 #define SPDK_CONFIG_OPENSSL_PATH 00:07:48.632 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:48.632 #define SPDK_CONFIG_PGO_DIR 00:07:48.632 #undef SPDK_CONFIG_PGO_USE 00:07:48.632 #define SPDK_CONFIG_PREFIX /usr/local 00:07:48.632 #undef SPDK_CONFIG_RAID5F 00:07:48.632 #undef SPDK_CONFIG_RBD 00:07:48.632 #define SPDK_CONFIG_RDMA 1 00:07:48.632 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:48.632 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:48.632 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:48.632 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:48.632 #define SPDK_CONFIG_SHARED 1 00:07:48.632 #undef SPDK_CONFIG_SMA 00:07:48.632 #define SPDK_CONFIG_TESTS 1 00:07:48.632 #undef SPDK_CONFIG_TSAN 00:07:48.632 #define SPDK_CONFIG_UBLK 1 00:07:48.632 #define SPDK_CONFIG_UBSAN 1 00:07:48.632 #undef SPDK_CONFIG_UNIT_TESTS 00:07:48.632 #undef SPDK_CONFIG_URING 00:07:48.632 #define SPDK_CONFIG_URING_PATH 00:07:48.632 #undef SPDK_CONFIG_URING_ZNS 00:07:48.632 #undef SPDK_CONFIG_USDT 00:07:48.632 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:48.632 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:48.632 #define SPDK_CONFIG_VFIO_USER 1 00:07:48.632 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:48.632 #define SPDK_CONFIG_VHOST 1 00:07:48.632 #define SPDK_CONFIG_VIRTIO 1 00:07:48.632 #undef SPDK_CONFIG_VTUNE 00:07:48.632 #define SPDK_CONFIG_VTUNE_DIR 00:07:48.632 #define SPDK_CONFIG_WERROR 1 00:07:48.632 #define SPDK_CONFIG_WPDK_DIR 00:07:48.632 #undef SPDK_CONFIG_XNVME 00:07:48.632 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.632 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : v22.11.4 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@157 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm 
-rf /var/tmp/asan_suppression_file 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export 
CLEAR_HUGE=yes 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3081659 ]] 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3081659 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.6o3o38 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.6o3o38/tests/target /tmp/spdk.6o3o38 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:48.633 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=spdk_devtmpfs 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=67108864 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=953643008 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4330786816 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=53537300480 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994708992 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8457408512 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30941716480 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997352448 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55635968 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12390182912 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398944256 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8761344 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:48.634 19:55:36 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30996303872 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997356544 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1052672 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199463936 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199468032 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:48.634 * Looking for test storage... 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=53537300480 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=10672001024 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:48.634 
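
The set_test_storage trace above parses `df -T` into associative arrays (mounts/fss/sizes/avails/uses) and settles on the overlay root because its free space covers the ~2 GiB request. A minimal sketch of that selection logic, assuming GNU coreutils df; the variable names and loop shape here are illustrative, not the harness's literal code:

    requested_size=$((2 * 1024 * 1024 * 1024))   # 2147483648 bytes, per the trace
    df -B1 --output=source,fstype,size,used,avail,target | tail -n +2 |
    while read -r source fs size used avail mount; do
        # first filesystem with enough free space wins, mirroring the trace
        if (( avail >= requested_size )); then
            echo "test storage: $mount ($fs, $avail bytes available)"
            break
        fi
    done
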
19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.634 19:55:36 
nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
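
nvmf/common.sh above derives both the host NQN and the host ID from a single `nvme gen-hostnqn` call (common.sh@17-19). A small sketch of that derivation, assuming nvme-cli is installed; the parameter-expansion step is an illustrative equivalent of line 18, which the trace shows only by its result:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the UUID after the last colon
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
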
00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:48.634 19:55:36 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:07:50.535 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:50.536 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:50.536 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.536 19:55:38 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:50.536 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:50.536 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.536 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link 
set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:50.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:07:50.795 00:07:50.795 --- 10.0.0.2 ping statistics --- 00:07:50.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.795 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:50.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:07:50.795 00:07:50.795 --- 10.0.0.1 ping statistics --- 00:07:50.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.795 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.795 ************************************ 00:07:50.795 START TEST nvmf_filesystem_no_in_capsule 00:07:50.795 ************************************ 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:50.795 19:55:38 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3083284 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3083284 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3083284 ']' 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:50.795 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.795 [2024-07-13 19:55:38.401823] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:07:50.795 [2024-07-13 19:55:38.401919] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.795 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.054 [2024-07-13 19:55:38.468218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.054 [2024-07-13 19:55:38.559845] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.054 [2024-07-13 19:55:38.559936] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.054 [2024-07-13 19:55:38.559951] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.054 [2024-07-13 19:55:38.559962] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.054 [2024-07-13 19:55:38.559972] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
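
nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace, records nvmfpid, and blocks in waitforlisten until the RPC socket answers. A sketch of that launch-and-wait pattern with paths abbreviated; the polling loop is an assumption standing in for the harness's waitforlisten body, not its exact code:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket until the target is up; bail out if it died
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
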
00:07:51.054 [2024-07-13 19:55:38.560022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.054 [2024-07-13 19:55:38.560076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.054 [2024-07-13 19:55:38.560144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.054 [2024-07-13 19:55:38.560146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.054 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:51.054 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:51.054 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.054 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:51.054 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.313 [2024-07-13 19:55:38.717709] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.313 Malloc1 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.313 [2024-07-13 19:55:38.902421] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:51.313 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:51.314 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:51.314 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:51.314 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:51.314 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:51.314 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.314 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.314 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.314 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:51.314 { 00:07:51.314 "name": "Malloc1", 00:07:51.314 "aliases": [ 00:07:51.314 "9bdfc1f6-fa29-467a-b1e3-7ea099a0a2dc" 00:07:51.314 ], 00:07:51.314 "product_name": "Malloc disk", 00:07:51.314 "block_size": 512, 00:07:51.314 "num_blocks": 1048576, 00:07:51.314 "uuid": "9bdfc1f6-fa29-467a-b1e3-7ea099a0a2dc", 00:07:51.314 "assigned_rate_limits": { 00:07:51.314 "rw_ios_per_sec": 0, 00:07:51.314 "rw_mbytes_per_sec": 0, 00:07:51.314 "r_mbytes_per_sec": 0, 00:07:51.314 "w_mbytes_per_sec": 0 00:07:51.314 }, 00:07:51.314 "claimed": true, 00:07:51.314 "claim_type": "exclusive_write", 00:07:51.314 "zoned": false, 00:07:51.314 "supported_io_types": { 00:07:51.314 "read": true, 00:07:51.314 "write": true, 00:07:51.314 "unmap": true, 00:07:51.314 "write_zeroes": true, 00:07:51.314 "flush": true, 00:07:51.314 "reset": true, 00:07:51.314 "compare": false, 00:07:51.314 "compare_and_write": false, 00:07:51.314 "abort": true, 00:07:51.314 "nvme_admin": false, 00:07:51.314 "nvme_io": false 00:07:51.314 }, 00:07:51.314 "memory_domains": [ 00:07:51.314 { 00:07:51.314 "dma_device_id": "system", 00:07:51.314 "dma_device_type": 1 00:07:51.314 }, 00:07:51.314 { 00:07:51.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.314 "dma_device_type": 2 00:07:51.314 } 00:07:51.314 ], 00:07:51.314 "driver_specific": {} 00:07:51.314 } 00:07:51.314 ]' 00:07:51.314 
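
The rpc_cmd calls traced above configure the target end to end: a TCP transport with in-capsule data disabled, a 512 MiB malloc bdev, a subsystem, its namespace, and a listener. The same sequence as direct rpc.py invocations, assuming the default /var/tmp/spdk.sock; rpc_cmd in the harness forwards these exact method names and arguments:

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0      # -c 0: no in-capsule data
    $rpc bdev_malloc_create 512 512 -b Malloc1             # 512 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
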
19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:51.314 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:51.314 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:51.572 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:51.572 19:55:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:51.572 19:55:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:51.572 19:55:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:51.572 19:55:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:52.137 19:55:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:52.137 19:55:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:52.137 19:55:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:52.137 19:55:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:52.137 19:55:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:54.100 19:55:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:54.100 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:54.358 19:55:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:55.292 19:55:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.223 ************************************ 00:07:56.223 START TEST filesystem_ext4 00:07:56.223 ************************************ 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:56.223 19:55:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:56.223 mke2fs 1.46.5 (30-Dec-2021) 00:07:56.223 Discarding device blocks: 0/522240 done 00:07:56.223 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:56.223 
Filesystem UUID: 2208105d-e4a3-4bfd-a13c-9c7ad8e5ca89 00:07:56.223 Superblock backups stored on blocks: 00:07:56.223 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:56.223 00:07:56.223 Allocating group tables: 0/64 done 00:07:56.224 Writing inode tables: 0/64 done 00:07:56.487 Creating journal (8192 blocks): done 00:07:57.420 Writing superblocks and filesystem accounting information: 0/64 done 00:07:57.420 00:07:57.420 19:55:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:57.420 19:55:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3083284 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:58.354 00:07:58.354 real 0m2.212s 00:07:58.354 user 0m0.020s 00:07:58.354 sys 0m0.059s 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:58.354 ************************************ 00:07:58.354 END TEST filesystem_ext4 00:07:58.354 ************************************ 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.354 ************************************ 00:07:58.354 START TEST filesystem_btrfs 00:07:58.354 ************************************ 00:07:58.354 19:55:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:58.354 19:55:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:58.920 btrfs-progs v6.6.2 00:07:58.920 See https://btrfs.readthedocs.io for more information. 00:07:58.920 00:07:58.920 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
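The make_filesystem helper traced just above (common/autotest_common.sh) is small enough to sketch. This is a reconstruction from the xtrace records only: the real helper also declares a retry counter i whose loop is not exercised in this run, so the sketch below simply tries the mkfs once.

    # Sketch of make_filesystem, reconstructed from the xtrace above.
    # ext4 needs -F to overwrite an existing filesystem; btrfs and xfs
    # use -f for the same purpose.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi
        mkfs."$fstype" $force "$dev_name" || return 1
        return 0
    }

    # Invoked here as: make_filesystem btrfs /dev/nvme0n1p1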
00:07:58.920 NOTE: several default settings have changed in version 5.15, please make sure 00:07:58.920 this does not affect your deployments: 00:07:58.920 - DUP for metadata (-m dup) 00:07:58.920 - enabled no-holes (-O no-holes) 00:07:58.920 - enabled free-space-tree (-R free-space-tree) 00:07:58.920 00:07:58.920 Label: (null) 00:07:58.920 UUID: 4cd08ab1-190d-4476-986a-0c5bdb541532 00:07:58.920 Node size: 16384 00:07:58.920 Sector size: 4096 00:07:58.920 Filesystem size: 510.00MiB 00:07:58.920 Block group profiles: 00:07:58.920 Data: single 8.00MiB 00:07:58.920 Metadata: DUP 32.00MiB 00:07:58.920 System: DUP 8.00MiB 00:07:58.920 SSD detected: yes 00:07:58.920 Zoned device: no 00:07:58.920 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:58.920 Runtime features: free-space-tree 00:07:58.920 Checksum: crc32c 00:07:58.920 Number of devices: 1 00:07:58.920 Devices: 00:07:58.920 ID SIZE PATH 00:07:58.920 1 510.00MiB /dev/nvme0n1p1 00:07:58.920 00:07:58.920 19:55:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:58.920 19:55:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3083284 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:59.854 00:07:59.854 real 0m1.315s 00:07:59.854 user 0m0.024s 00:07:59.854 sys 0m0.105s 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:59.854 ************************************ 00:07:59.854 END TEST filesystem_btrfs 00:07:59.854 ************************************ 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:59.854 19:55:47 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.854 ************************************ 00:07:59.854 START TEST filesystem_xfs 00:07:59.854 ************************************ 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:59.854 19:55:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:59.854 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:59.854 = sectsz=512 attr=2, projid32bit=1 00:07:59.854 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:59.854 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:59.854 data = bsize=4096 blocks=130560, imaxpct=25 00:07:59.855 = sunit=0 swidth=0 blks 00:07:59.855 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:59.855 log =internal log bsize=4096 blocks=16384, version=2 00:07:59.855 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:59.855 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:00.787 Discarding blocks...Done. 
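Every mkfs in this suite is followed by the same smoke test (target/filesystem.sh lines 23 through 43 in the xtrace): mount the fresh filesystem over NVMe/TCP, write and delete a file, unmount, then confirm the target process and both block devices are still present. Condensed below, with error handling elided; $nvmfpid stands for the nvmf_tgt pid recorded at startup (3083284 in this run).

    # Post-mkfs smoke test, condensed from target/filesystem.sh as traced above.
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                       # push a write through the fabric
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                          # target process still alive?
    lsblk -l -o NAME | grep -q -w nvme0n1       # controller still attached?
    lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still visible?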
00:08:00.787 19:55:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:00.787 19:55:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:02.685 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:02.685 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:02.685 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:02.685 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3083284 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:02.943 00:08:02.943 real 0m3.085s 00:08:02.943 user 0m0.020s 00:08:02.943 sys 0m0.062s 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:02.943 ************************************ 00:08:02.943 END TEST filesystem_xfs 00:08:02.943 ************************************ 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:02.943 
19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3083284 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3083284 ']' 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3083284 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3083284 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3083284' 00:08:02.943 killing process with pid 3083284 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3083284 00:08:02.943 19:55:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3083284 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:03.510 00:08:03.510 real 0m12.681s 00:08:03.510 user 0m48.802s 00:08:03.510 sys 0m1.768s 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.510 ************************************ 00:08:03.510 END TEST nvmf_filesystem_no_in_capsule 00:08:03.510 ************************************ 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.510 
************************************ 00:08:03.510 START TEST nvmf_filesystem_in_capsule 00:08:03.510 ************************************ 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3084980 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3084980 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3084980 ']' 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:03.510 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.510 [2024-07-13 19:55:51.139566] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:03.510 [2024-07-13 19:55:51.139647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.768 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.768 [2024-07-13 19:55:51.205107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.768 [2024-07-13 19:55:51.290323] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.768 [2024-07-13 19:55:51.290374] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.768 [2024-07-13 19:55:51.290402] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.768 [2024-07-13 19:55:51.290413] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.768 [2024-07-13 19:55:51.290423] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
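nvmfappstart launches a fresh target for this suite: nvmf_tgt pinned to four cores (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF), inside the cvl_0_0_ns_spdk network namespace, exactly as the command line in the record above shows. The wait loop below is a plausible sketch of the waitforlisten step, not SPDK's exact code; it polls the default RPC socket until the app answers, and assumes the SPDK source tree as the working directory.

    # Start the target and wait for its RPC socket (sketch).
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # keep polling until the reactors are up and RPC responds
    done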
00:08:03.768 [2024-07-13 19:55:51.290506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.768 [2024-07-13 19:55:51.290536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.768 [2024-07-13 19:55:51.290594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.768 [2024-07-13 19:55:51.290596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.768 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:03.768 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:08:03.768 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:03.768 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.768 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.026 [2024-07-13 19:55:51.443718] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.026 Malloc1 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.026 19:55:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.026 [2024-07-13 19:55:51.630230] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:08:04.026 { 00:08:04.026 "name": "Malloc1", 00:08:04.026 "aliases": [ 00:08:04.026 "150ee4d2-314a-4f11-a9a5-145296e03d19" 00:08:04.026 ], 00:08:04.026 "product_name": "Malloc disk", 00:08:04.026 "block_size": 512, 00:08:04.026 "num_blocks": 1048576, 00:08:04.026 "uuid": "150ee4d2-314a-4f11-a9a5-145296e03d19", 00:08:04.026 "assigned_rate_limits": { 00:08:04.026 "rw_ios_per_sec": 0, 00:08:04.026 "rw_mbytes_per_sec": 0, 00:08:04.026 "r_mbytes_per_sec": 0, 00:08:04.026 "w_mbytes_per_sec": 0 00:08:04.026 }, 00:08:04.026 "claimed": true, 00:08:04.026 "claim_type": "exclusive_write", 00:08:04.026 "zoned": false, 00:08:04.026 "supported_io_types": { 00:08:04.026 "read": true, 00:08:04.026 "write": true, 00:08:04.026 "unmap": true, 00:08:04.026 "write_zeroes": true, 00:08:04.026 "flush": true, 00:08:04.026 "reset": true, 00:08:04.026 "compare": false, 00:08:04.026 "compare_and_write": false, 00:08:04.026 "abort": true, 00:08:04.026 "nvme_admin": false, 00:08:04.026 "nvme_io": false 00:08:04.026 }, 00:08:04.026 "memory_domains": [ 00:08:04.026 { 00:08:04.026 "dma_device_id": "system", 00:08:04.026 "dma_device_type": 1 00:08:04.026 }, 00:08:04.026 { 00:08:04.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.026 "dma_device_type": 2 00:08:04.026 } 00:08:04.026 ], 00:08:04.026 "driver_specific": {} 00:08:04.026 } 00:08:04.026 ]' 00:08:04.026 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] 
.block_size' 00:08:04.283 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:08:04.283 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:08:04.283 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:08:04.283 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:08:04.283 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:08:04.283 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:04.283 19:55:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:04.848 19:55:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:04.848 19:55:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:08:04.848 19:55:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:04.848 19:55:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:04.848 19:55:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- 
# nvme_size=536870912 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:07.372 19:55:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:07.629 19:55:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.559 ************************************ 00:08:08.559 START TEST filesystem_in_capsule_ext4 00:08:08.559 ************************************ 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:08:08.559 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:08.559 mke2fs 1.46.5 (30-Dec-2021) 00:08:08.816 Discarding device blocks: 0/522240 done 00:08:08.816 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:08.816 Filesystem UUID: 25d945d0-bd8a-4f29-95ac-3a850e7d86f5 00:08:08.816 Superblock backups stored on blocks: 00:08:08.816 8193, 
24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:08.816 00:08:08.816 Allocating group tables: 0/64 done 00:08:08.816 Writing inode tables: 0/64 done 00:08:08.816 Creating journal (8192 blocks): done 00:08:08.816 Writing superblocks and filesystem accounting information: 0/64 done 00:08:08.816 00:08:08.816 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:08:08.816 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.816 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3084980 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.074 00:08:09.074 real 0m0.361s 00:08:09.074 user 0m0.018s 00:08:09.074 sys 0m0.052s 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:09.074 ************************************ 00:08:09.074 END TEST filesystem_in_capsule_ext4 00:08:09.074 ************************************ 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.074 ************************************ 00:08:09.074 START TEST filesystem_in_capsule_btrfs 00:08:09.074 ************************************ 00:08:09.074 19:55:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:08:09.074 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:09.355 btrfs-progs v6.6.2 00:08:09.355 See https://btrfs.readthedocs.io for more information. 00:08:09.355 00:08:09.355 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
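For reference, the bring-up recorded at the top of this suite maps onto plain rpc.py calls as sketched below (rpc_cmd in the xtrace ultimately drives the same rpc.py interface). The only delta from the no_in_capsule run is -c 4096, which advertises a 4096-byte in-capsule data size so small writes travel inside the command capsule instead of needing a separate data transfer; the 512 MiB malloc bdev, subsystem, listener, and host connect are identical. The short nvme connect form here omits the --hostnqn/--hostid UUID arguments the run above passes.

    # Bring-up for the in-capsule suite, rewritten as direct rpc.py calls.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096
    $rpc bdev_malloc_create 512 512 -b Malloc1          # 512 MiB, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420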
00:08:09.355 NOTE: several default settings have changed in version 5.15, please make sure 00:08:09.355 this does not affect your deployments: 00:08:09.355 - DUP for metadata (-m dup) 00:08:09.355 - enabled no-holes (-O no-holes) 00:08:09.355 - enabled free-space-tree (-R free-space-tree) 00:08:09.355 00:08:09.355 Label: (null) 00:08:09.355 UUID: 1e53a38e-0a87-4e7e-8f2e-8488576ce0fb 00:08:09.355 Node size: 16384 00:08:09.355 Sector size: 4096 00:08:09.355 Filesystem size: 510.00MiB 00:08:09.355 Block group profiles: 00:08:09.355 Data: single 8.00MiB 00:08:09.355 Metadata: DUP 32.00MiB 00:08:09.355 System: DUP 8.00MiB 00:08:09.355 SSD detected: yes 00:08:09.355 Zoned device: no 00:08:09.355 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:09.355 Runtime features: free-space-tree 00:08:09.355 Checksum: crc32c 00:08:09.355 Number of devices: 1 00:08:09.355 Devices: 00:08:09.355 ID SIZE PATH 00:08:09.355 1 510.00MiB /dev/nvme0n1p1 00:08:09.355 00:08:09.355 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:08:09.355 19:55:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3084980 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:10.304 00:08:10.304 real 0m1.182s 00:08:10.304 user 0m0.024s 00:08:10.304 sys 0m0.118s 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:10.304 ************************************ 00:08:10.304 END TEST filesystem_in_capsule_btrfs 00:08:10.304 ************************************ 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.304 ************************************ 00:08:10.304 START TEST filesystem_in_capsule_xfs 00:08:10.304 ************************************ 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:08:10.304 19:55:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:10.304 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:10.304 = sectsz=512 attr=2, projid32bit=1 00:08:10.304 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:10.304 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:10.304 data = bsize=4096 blocks=130560, imaxpct=25 00:08:10.304 = sunit=0 swidth=0 blks 00:08:10.304 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:10.304 log =internal log bsize=4096 blocks=16384, version=2 00:08:10.304 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:10.304 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:11.678 Discarding blocks...Done. 
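Once the final filesystem check below completes, the suite unwinds everything it created, as the records that follow show: drop the test partition, detach the host, delete the subsystem, and stop the target. A condensed sketch, taken from target/filesystem.sh lines 91 through 101 and the killprocess helper as traced below; $nvmfpid is the nvmf_tgt pid (3084980 in this run).

    # Teardown, condensed from the xtrace below.
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # serialize partition removal
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # host detaches first
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid"                                  # killprocess: SIGTERM ...
    wait "$nvmfpid"                                  # ... then reap the target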
00:08:11.678 19:55:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:08:11.678 19:55:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3084980 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:13.578 00:08:13.578 real 0m2.977s 00:08:13.578 user 0m0.023s 00:08:13.578 sys 0m0.052s 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:13.578 ************************************ 00:08:13.578 END TEST filesystem_in_capsule_xfs 00:08:13.578 ************************************ 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:13.578 19:56:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:13.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.578 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:13.578 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:08:13.578 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.579 19:56:01 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3084980 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3084980 ']' 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3084980 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3084980 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3084980' 00:08:13.579 killing process with pid 3084980 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3084980 00:08:13.579 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3084980 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:14.147 00:08:14.147 real 0m10.435s 00:08:14.147 user 0m39.979s 00:08:14.147 sys 0m1.638s 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.147 ************************************ 00:08:14.147 END TEST nvmf_filesystem_in_capsule 00:08:14.147 ************************************ 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem 
-- nvmf/common.sh@120 -- # set +e 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.147 rmmod nvme_tcp 00:08:14.147 rmmod nvme_fabrics 00:08:14.147 rmmod nvme_keyring 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.147 19:56:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.050 19:56:03 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:16.050 00:08:16.050 real 0m27.597s 00:08:16.050 user 1m29.677s 00:08:16.050 sys 0m4.989s 00:08:16.050 19:56:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:16.050 19:56:03 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.050 ************************************ 00:08:16.050 END TEST nvmf_filesystem 00:08:16.050 ************************************ 00:08:16.050 19:56:03 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:16.050 19:56:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:16.050 19:56:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:16.050 19:56:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.050 ************************************ 00:08:16.050 START TEST nvmf_target_discovery 00:08:16.050 ************************************ 00:08:16.050 19:56:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:16.309 * Looking for test storage... 
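Between suites the harness tears the initiator stack down; the rmmod lines above come from nvmfcleanup in nvmf/common.sh. A rough equivalent of that cleanup under the same assumptions (module and interface names from the trace; the netns deletion is an assumed stand-in for the _remove_spdk_ns helper, whose body is not shown in this log):

set +e                                    # module removal can fail while references drain
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break      # the rmmod output above shows nvme_fabrics/nvme_keyring going too
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1                  # drop the initiator-side address, as in the trace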
00:08:16.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:16.309 19:56:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.215 19:56:05 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:18.215 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:18.215 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:18.215 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:18.215 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:18.215 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:18.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:08:18.474 00:08:18.474 --- 10.0.0.2 ping statistics --- 00:08:18.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.474 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:18.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:08:18.474 00:08:18.474 --- 10.0.0.1 ping statistics --- 00:08:18.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.474 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3088328 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3088328 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3088328 ']' 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:18.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:18.474 19:56:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.474 [2024-07-13 19:56:06.004417] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:18.474 [2024-07-13 19:56:06.004498] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.474 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.474 [2024-07-13 19:56:06.069466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.733 [2024-07-13 19:56:06.160439] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.733 [2024-07-13 19:56:06.160495] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.733 [2024-07-13 19:56:06.160523] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.733 [2024-07-13 19:56:06.160535] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.733 [2024-07-13 19:56:06.160545] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.733 [2024-07-13 19:56:06.160624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.733 [2024-07-13 19:56:06.160672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.733 [2024-07-13 19:56:06.160757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.733 [2024-07-13 19:56:06.160760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.733 [2024-07-13 19:56:06.316606] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:18.733 19:56:06 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.733 Null1 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.733 [2024-07-13 19:56:06.356948] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.733 Null2 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:18.733 19:56:06 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.733 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.991 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.991 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.991 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:18.991 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.991 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.991 Null3 00:08:18.991 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.991 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:18.991 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.991 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.991 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.991 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:18.991 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.991 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.991 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.992 Null4 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.992 19:56:06 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:18.992 00:08:18.992 Discovery Log Number of Records 6, Generation counter 6 00:08:18.992 =====Discovery Log Entry 0====== 00:08:18.992 trtype: tcp 00:08:18.992 adrfam: ipv4 00:08:18.992 subtype: current discovery subsystem 00:08:18.992 treq: not required 00:08:18.992 portid: 0 00:08:18.992 trsvcid: 4420 00:08:18.992 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:18.992 traddr: 10.0.0.2 00:08:18.992 eflags: explicit discovery connections, duplicate discovery information 00:08:18.992 sectype: none 00:08:18.992 =====Discovery Log Entry 1====== 00:08:18.992 trtype: tcp 00:08:18.992 adrfam: ipv4 00:08:18.992 subtype: nvme subsystem 00:08:18.992 treq: not required 00:08:18.992 portid: 0 00:08:18.992 trsvcid: 4420 00:08:18.992 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:18.992 traddr: 10.0.0.2 00:08:18.992 eflags: none 00:08:18.992 sectype: none 00:08:18.992 =====Discovery Log Entry 2====== 00:08:18.992 trtype: tcp 00:08:18.992 adrfam: ipv4 00:08:18.992 subtype: nvme subsystem 00:08:18.992 treq: not required 00:08:18.992 portid: 0 00:08:18.992 trsvcid: 4420 00:08:18.992 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:18.992 traddr: 10.0.0.2 00:08:18.992 eflags: none 00:08:18.992 sectype: none 00:08:18.992 =====Discovery Log Entry 3====== 00:08:18.992 trtype: tcp 00:08:18.992 adrfam: ipv4 00:08:18.992 subtype: nvme subsystem 00:08:18.992 treq: not required 00:08:18.992 portid: 0 00:08:18.992 trsvcid: 4420 00:08:18.992 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:18.992 traddr: 10.0.0.2 00:08:18.992 eflags: none 00:08:18.992 sectype: none 00:08:18.992 =====Discovery Log Entry 4====== 00:08:18.992 trtype: tcp 00:08:18.992 adrfam: ipv4 00:08:18.992 subtype: nvme subsystem 00:08:18.992 treq: not required 
00:08:18.992 portid: 0 00:08:18.992 trsvcid: 4420 00:08:18.992 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:18.992 traddr: 10.0.0.2 00:08:18.992 eflags: none 00:08:18.992 sectype: none 00:08:18.992 =====Discovery Log Entry 5====== 00:08:18.992 trtype: tcp 00:08:18.992 adrfam: ipv4 00:08:18.992 subtype: discovery subsystem referral 00:08:18.992 treq: not required 00:08:18.992 portid: 0 00:08:18.992 trsvcid: 4430 00:08:18.992 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:18.992 traddr: 10.0.0.2 00:08:18.992 eflags: none 00:08:18.992 sectype: none 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:18.992 Perform nvmf subsystem discovery via RPC 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.992 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.992 [ 00:08:18.992 { 00:08:18.992 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:18.992 "subtype": "Discovery", 00:08:18.992 "listen_addresses": [ 00:08:18.992 { 00:08:18.992 "trtype": "TCP", 00:08:18.992 "adrfam": "IPv4", 00:08:18.992 "traddr": "10.0.0.2", 00:08:18.992 "trsvcid": "4420" 00:08:18.992 } 00:08:18.992 ], 00:08:18.992 "allow_any_host": true, 00:08:18.992 "hosts": [] 00:08:18.992 }, 00:08:18.992 { 00:08:18.992 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.992 "subtype": "NVMe", 00:08:18.992 "listen_addresses": [ 00:08:18.992 { 00:08:18.992 "trtype": "TCP", 00:08:18.992 "adrfam": "IPv4", 00:08:18.992 "traddr": "10.0.0.2", 00:08:18.992 "trsvcid": "4420" 00:08:18.992 } 00:08:18.992 ], 00:08:18.992 "allow_any_host": true, 00:08:18.992 "hosts": [], 00:08:18.992 "serial_number": "SPDK00000000000001", 00:08:18.992 "model_number": "SPDK bdev Controller", 00:08:18.992 "max_namespaces": 32, 00:08:18.992 "min_cntlid": 1, 00:08:18.992 "max_cntlid": 65519, 00:08:18.992 "namespaces": [ 00:08:18.992 { 00:08:18.992 "nsid": 1, 00:08:18.992 "bdev_name": "Null1", 00:08:18.992 "name": "Null1", 00:08:18.992 "nguid": "CEE8201DE35C48CFA5662F99E5DFD15D", 00:08:18.992 "uuid": "cee8201d-e35c-48cf-a566-2f99e5dfd15d" 00:08:18.992 } 00:08:18.992 ] 00:08:18.992 }, 00:08:18.992 { 00:08:18.992 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:18.992 "subtype": "NVMe", 00:08:18.992 "listen_addresses": [ 00:08:18.992 { 00:08:18.992 "trtype": "TCP", 00:08:18.992 "adrfam": "IPv4", 00:08:18.992 "traddr": "10.0.0.2", 00:08:18.992 "trsvcid": "4420" 00:08:18.992 } 00:08:18.992 ], 00:08:19.251 "allow_any_host": true, 00:08:19.251 "hosts": [], 00:08:19.251 "serial_number": "SPDK00000000000002", 00:08:19.251 "model_number": "SPDK bdev Controller", 00:08:19.251 "max_namespaces": 32, 00:08:19.251 "min_cntlid": 1, 00:08:19.251 "max_cntlid": 65519, 00:08:19.251 "namespaces": [ 00:08:19.251 { 00:08:19.251 "nsid": 1, 00:08:19.251 "bdev_name": "Null2", 00:08:19.251 "name": "Null2", 00:08:19.251 "nguid": "11C828EA0DEC4DC7B1A2A4CDAFC21721", 00:08:19.251 "uuid": "11c828ea-0dec-4dc7-b1a2-a4cdafc21721" 00:08:19.251 } 00:08:19.251 ] 00:08:19.251 }, 00:08:19.251 { 00:08:19.251 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:19.251 "subtype": "NVMe", 00:08:19.251 "listen_addresses": [ 00:08:19.251 { 00:08:19.251 "trtype": "TCP", 00:08:19.251 "adrfam": "IPv4", 00:08:19.251 "traddr": "10.0.0.2", 00:08:19.251 "trsvcid": "4420" 00:08:19.251 } 00:08:19.251 ], 00:08:19.251 "allow_any_host": true, 
00:08:19.251 "hosts": [], 00:08:19.251 "serial_number": "SPDK00000000000003", 00:08:19.251 "model_number": "SPDK bdev Controller", 00:08:19.251 "max_namespaces": 32, 00:08:19.251 "min_cntlid": 1, 00:08:19.251 "max_cntlid": 65519, 00:08:19.251 "namespaces": [ 00:08:19.251 { 00:08:19.251 "nsid": 1, 00:08:19.251 "bdev_name": "Null3", 00:08:19.251 "name": "Null3", 00:08:19.251 "nguid": "FE1295303E044DFE96EE0BA5F79D43F4", 00:08:19.251 "uuid": "fe129530-3e04-4dfe-96ee-0ba5f79d43f4" 00:08:19.251 } 00:08:19.251 ] 00:08:19.251 }, 00:08:19.251 { 00:08:19.251 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:19.251 "subtype": "NVMe", 00:08:19.251 "listen_addresses": [ 00:08:19.251 { 00:08:19.251 "trtype": "TCP", 00:08:19.251 "adrfam": "IPv4", 00:08:19.251 "traddr": "10.0.0.2", 00:08:19.251 "trsvcid": "4420" 00:08:19.251 } 00:08:19.251 ], 00:08:19.252 "allow_any_host": true, 00:08:19.252 "hosts": [], 00:08:19.252 "serial_number": "SPDK00000000000004", 00:08:19.252 "model_number": "SPDK bdev Controller", 00:08:19.252 "max_namespaces": 32, 00:08:19.252 "min_cntlid": 1, 00:08:19.252 "max_cntlid": 65519, 00:08:19.252 "namespaces": [ 00:08:19.252 { 00:08:19.252 "nsid": 1, 00:08:19.252 "bdev_name": "Null4", 00:08:19.252 "name": "Null4", 00:08:19.252 "nguid": "AA580EE91ECB4AEA943AAFAD65F09313", 00:08:19.252 "uuid": "aa580ee9-1ecb-4aea-943a-afad65f09313" 00:08:19.252 } 00:08:19.252 ] 00:08:19.252 } 00:08:19.252 ] 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
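The JSON above is the raw nvmf_get_subsystems reply, and the surrounding xtrace is the seq 1 4 loop tearing down what the test created. Reproducing one subsystem's lifecycle by hand over the RPC socket would look roughly like this sketch (scripts/rpc.py is the RPC client shipped in the SPDK tree; the 102400/512 null-bdev geometry matches NULL_BDEV_SIZE and NULL_BLOCK_SIZE set earlier in discovery.sh):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_null_create Null1 102400 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems                  # returns JSON like the reply shown above
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # teardown, as in the loop
./scripts/rpc.py bdev_null_delete Null1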
00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.252 rmmod nvme_tcp 00:08:19.252 rmmod nvme_fabrics 00:08:19.252 rmmod nvme_keyring 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3088328 ']' 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3088328 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3088328 ']' 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3088328 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3088328 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3088328' 00:08:19.252 killing process with pid 3088328 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3088328 00:08:19.252 19:56:06 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3088328 00:08:19.511 19:56:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:19.511 19:56:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:19.511 19:56:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:19.511 19:56:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:19.511 19:56:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:19.511 19:56:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.511 19:56:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.511 19:56:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.062 19:56:09 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:22.062 00:08:22.062 real 0m5.413s 00:08:22.062 user 0m4.371s 00:08:22.062 sys 0m1.868s 00:08:22.062 19:56:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.062 19:56:09 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.062 ************************************ 00:08:22.062 END TEST nvmf_target_discovery 00:08:22.062 ************************************ 00:08:22.062 19:56:09 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test 
nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:22.062 19:56:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:22.062 19:56:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.062 19:56:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:22.062 ************************************ 00:08:22.062 START TEST nvmf_referrals 00:08:22.062 ************************************ 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:22.062 * Looking for test storage... 00:08:22.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:22.062 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
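[editor's note] The three loopback referral addresses above (127.0.0.2–127.0.0.4), together with the referral port 4430 and the NQNs set in the entries just below, feed a small helper, get_referral_ips, that the trace keeps re-entering (referrals.sh@19–26). Reconstructed from the xtrace alone — the actual referrals.sh may differ in detail — it compares the target-side and initiator-side views of the referral list:

```bash
# Hedged reconstruction from the xtrace; the argument picks the vantage point.
get_referral_ips() {
	if [[ $1 == "rpc" ]]; then
		# Target view: ask the running SPDK app for its referral table.
		echo $(rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
	elif [[ $1 == "nvme" ]]; then
		# Initiator view: run a discovery and keep only non-current (referral) records.
		echo $(nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json \
			| jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
			| sort)
	fi
}
```

The test passes when both views print the same sorted address list.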
00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:22.063 19:56:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.965 19:56:11 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.965 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:23.966 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:23.966 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.966 19:56:11 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:23.966 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:23.966 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.966 19:56:11 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:23.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:08:23.966 00:08:23.966 --- 10.0.0.2 ping statistics --- 00:08:23.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.966 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:23.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:08:23.966 00:08:23.966 --- 10.0.0.1 ping statistics --- 00:08:23.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.966 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3090410 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3090410 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3090410 ']' 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
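[editor's note] The banner echoed here marks the launch of nvmf_tgt inside the target namespace (pid 3090410 in this run); the harness then blocks until the app's RPC socket answers before issuing any nvmf RPCs. waitforlisten in autotest_common.sh is more involved than this, but the gate it implements is essentially the following sketch, assuming the default /var/tmp/spdk.sock:

```bash
# Sketch only: poll until the freshly started nvmf_tgt answers RPCs,
# bailing out early if the process dies ($nvmfpid captured at launch).
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
	kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; return 1; }
	sleep 0.5
done
```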
00:08:23.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:23.966 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.966 [2024-07-13 19:56:11.464115] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:08:23.966 [2024-07-13 19:56:11.464210] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.966 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.966 [2024-07-13 19:56:11.535200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.224 [2024-07-13 19:56:11.626444] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.224 [2024-07-13 19:56:11.626518] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.224 [2024-07-13 19:56:11.626537] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.224 [2024-07-13 19:56:11.626549] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.224 [2024-07-13 19:56:11.626559] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.224 [2024-07-13 19:56:11.626612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.225 [2024-07-13 19:56:11.626673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.225 [2024-07-13 19:56:11.626723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.225 [2024-07-13 19:56:11.626725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.225 [2024-07-13 19:56:11.787685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.225 [2024-07-13 19:56:11.799954] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 
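[editor's note] With the discovery listener up on 10.0.0.2:8009, the body of the test is the round-trip traced below: publish referrals over RPC, then confirm that both the target's table and an initiator-side discovery report the same addresses. The commands are as they appear in the trace; rpc_cmd is the harness wrapper around scripts/rpc.py:

```bash
# Register three referral targets with the discovery service (port 4430).
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
	rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# Target-side check: the referral table should now hold three entries.
rpc_cmd nvmf_discovery_get_referrals | jq length   # expect: 3

# Initiator-side check: a discovery against 10.0.0.2:8009 should surface
# the same traddrs as referral records.
nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json \
	| jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
```

The later steps exercise the same loop with -n to attach a subsystem NQN to a referral, then remove each entry and verify the table drains back to zero.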
00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:24.225 19:56:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.482 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:24.482 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:24.482 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:24.482 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:24.482 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:24.482 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 
-s 8009 -o json 00:08:24.482 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:24.482 19:56:11 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:24.482 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:24.482 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:24.482 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:24.482 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.482 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.482 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.482 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:24.482 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.482 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.482 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:24.738 19:56:12 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:24.738 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:24.739 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.739 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:24.739 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:24.995 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:24.995 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:24.995 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:24.995 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:24.995 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:24.995 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.995 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:24.995 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- 
# [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:24.995 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:24.995 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:24.996 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:24.996 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.996 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # 
get_discovery_entries 'nvme subsystem' 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.253 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:25.510 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:25.510 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:25.510 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:25.510 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:25.510 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.510 19:56:12 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 
-- # echo 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:25.510 19:56:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:25.820 rmmod nvme_tcp 00:08:25.820 rmmod nvme_fabrics 00:08:25.820 rmmod nvme_keyring 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3090410 ']' 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3090410 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3090410 ']' 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3090410 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3090410 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3090410' 00:08:25.820 killing process with pid 3090410 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3090410 00:08:25.820 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3090410 00:08:26.079 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:26.079 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:26.079 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:26.079 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:26.079 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:26.079 19:56:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.079 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.079 19:56:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.987 19:56:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:27.987 00:08:27.987 real 0m6.359s 00:08:27.987 user 0m8.629s 00:08:27.987 sys 0m2.214s 00:08:27.987 19:56:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:08:27.987 19:56:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:27.987 ************************************ 00:08:27.987 END TEST nvmf_referrals 00:08:27.987 ************************************ 00:08:27.987 19:56:15 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:27.987 19:56:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:27.987 19:56:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.987 19:56:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.987 ************************************ 00:08:27.987 START TEST nvmf_connect_disconnect 00:08:27.987 ************************************ 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:27.987 * Looking for test storage... 00:08:27.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.987 19:56:15 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:27.987 19:56:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:30.516 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:30.516 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:30.516 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:30.516 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.516 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:30.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:08:30.517 00:08:30.517 --- 10.0.0.2 ping statistics --- 00:08:30.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.517 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:30.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:08:30.517 00:08:30.517 --- 10.0.0.1 ping statistics --- 00:08:30.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.517 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3092586 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3092586 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3092586 ']' 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:30.517 19:56:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.517 [2024-07-13 19:56:17.854131] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
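The bring-up above gives the target side its own network namespace so the initiator and target can exercise real E810 ports on one host: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and nvmf_tgt is then launched inside the namespace. A minimal standalone sketch of that topology, plus the 100-iteration connect/disconnect loop the test runs once the subsystem is provisioned (interface names, addresses and the iptables rule are taken from this log; the connect flags besides '-i 8' are inferred from the listener created below, and the explicit nvme disconnect between reports is an assumption about what connect_disconnect.sh does each pass):

  # Two-namespace loopback topology, as nvmf_tcp_init builds it (sketch)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Connect/disconnect loop (num_iterations=100; -i 8 requests 8 I/O queues)
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # prints 'disconnected 1 controller(s)'
  done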
00:08:30.517 [2024-07-13 19:56:17.854221] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.517 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.517 [2024-07-13 19:56:17.925151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.517 [2024-07-13 19:56:18.022466] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.517 [2024-07-13 19:56:18.022512] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.517 [2024-07-13 19:56:18.022529] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.517 [2024-07-13 19:56:18.022543] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.517 [2024-07-13 19:56:18.022555] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.517 [2024-07-13 19:56:18.022627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.517 [2024-07-13 19:56:18.022680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.517 [2024-07-13 19:56:18.022730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.517 [2024-07-13 19:56:18.022733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.517 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:30.517 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:30.517 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.517 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:30.517 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.775 [2024-07-13 19:56:18.184860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:30.775 19:56:18 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.775 [2024-07-13 19:56:18.246294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:30.775 19:56:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:08:33.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [... 98 further identical per-iteration reports elided: the connect/disconnect cycle runs 100 times in all, with per-iteration timestamps from 00:08:33.299 through 00:12:21.710 ...] 00:12:21.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:21.710 rmmod nvme_tcp 00:12:21.710 rmmod nvme_fabrics 00:12:21.710 rmmod nvme_keyring 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3092586 ']' 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3092586 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z
3092586 ']' 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3092586 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3092586 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3092586' 00:12:21.710 killing process with pid 3092586 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3092586 00:12:21.710 20:00:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3092586 00:12:21.710 20:00:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:21.710 20:00:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:21.710 20:00:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:21.710 20:00:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.710 20:00:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:21.710 20:00:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.710 20:00:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.710 20:00:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.615 20:00:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:23.615 00:12:23.615 real 3m55.585s 00:12:23.615 user 14m57.563s 00:12:23.615 sys 0m34.160s 00:12:23.615 20:00:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:23.615 20:00:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:23.615 ************************************ 00:12:23.615 END TEST nvmf_connect_disconnect 00:12:23.615 ************************************ 00:12:23.615 20:00:11 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:23.616 20:00:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:23.616 20:00:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:23.616 20:00:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:23.616 ************************************ 00:12:23.616 START TEST nvmf_multitarget 00:12:23.616 ************************************ 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:23.616 * Looking for test storage... 
00:12:23.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2-@6 -- # [PATH setup records elided: three near-identical assignments prepending the /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin toolchain directories (the same segments repeated from earlier sourcing), followed by 'export PATH' and an echo of the final value] 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
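nvmftestinit now has prepare_net_devs scan the PCI bus for supported NICs; the e810/x722/mlx arrays xtraced below are filled from a pci_bus_cache map keyed by vendor:device IDs. A rough equivalent of that classification step, assuming lspci -Dn output instead of the cache (the vendor and device IDs are copied from this log; the real script matches each Mellanox ID individually rather than by prefix):

  # Bucket NVMe-oF-capable NICs by PCI vendor:device ID (sketch)
  intel=0x8086 mellanox=0x15b3
  e810=() x722=() mlx=()
  while read -r addr vendor device; do
      case "$vendor:$device" in
          "$intel:0x1592" | "$intel:0x159b") e810+=("$addr") ;;  # Intel E810 (ice driver)
          "$intel:0x37d2")                   x722+=("$addr") ;;  # Intel X722
          "$mellanox:"*)                     mlx+=("$addr") ;;   # ConnectX/BlueField family
      esac
  done < <(lspci -Dn | awk '{split($3, id, ":"); printf "%s 0x%s 0x%s\n", $1, id[1], id[2]}')
  pci_devs=("${e810[@]}")  # with NET_TYPE=phy on an e810 rig, only E810 ports are kept

In this run that leaves exactly the two E810 functions, 0000:0a:00.0 and 0000:0a:00.1.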
00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:23.616 20:00:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:26.144 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:26.144 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:26.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
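Each surviving PCI function is then resolved to its kernel net interface through sysfs, which is what the pci_net_devs glob above does before stripping the path prefix. A condensed sketch of that mapping (reading operstate is an assumption about where the 'up' compared in '[[ up == up ]]' comes from):

  # Map one PCI function to its net device(s) and keep only links that are up (sketch)
  pci=0000:0a:00.0
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
      dev=${path##*/}                                  # strip the path, e.g. cvl_0_0
      [[ $(cat "/sys/class/net/$dev/operstate") == up ]] &&
          echo "Found net devices under $pci: $dev"
  done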
00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:26.144 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.144 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:26.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:12:26.145 00:12:26.145 --- 10.0.0.2 ping statistics --- 00:12:26.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.145 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:12:26.145 00:12:26.145 --- 10.0.0.1 ping statistics --- 00:12:26.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.145 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3124297 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3124297 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3124297 ']' 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:26.145 [2024-07-13 20:00:13.410803] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
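The multitarget case that now starts is purely about SPDK's target-management RPCs: it checks that exactly one default target exists, creates two more, deletes them again, and verifies the count with jq after each step. The sequence it drives, collected from the xtrace below ('-s 32' presumably sizes the new target's maximum subsystem count):

  # Multitarget RPC flow (sketch; rpc_py is the test's own multitarget_rpc.py wrapper)
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]  # only the default target at startup
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]  # default plus the two new targets
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]  # back to the default only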
00:12:26.145 [2024-07-13 20:00:13.410913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.145 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.145 [2024-07-13 20:00:13.481517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.145 [2024-07-13 20:00:13.575471] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.145 [2024-07-13 20:00:13.575534] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.145 [2024-07-13 20:00:13.575560] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.145 [2024-07-13 20:00:13.575573] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.145 [2024-07-13 20:00:13.575584] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.145 [2024-07-13 20:00:13.575668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.145 [2024-07-13 20:00:13.575723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.145 [2024-07-13 20:00:13.575779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.145 [2024-07-13 20:00:13.575782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:26.145 20:00:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:26.414 20:00:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:26.414 20:00:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:26.414 "nvmf_tgt_1" 00:12:26.414 20:00:13 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:26.414 "nvmf_tgt_2" 00:12:26.414 20:00:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:26.414 20:00:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:26.672 20:00:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:26.672 
20:00:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:26.672 true 00:12:26.672 20:00:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:26.931 true 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:26.931 rmmod nvme_tcp 00:12:26.931 rmmod nvme_fabrics 00:12:26.931 rmmod nvme_keyring 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3124297 ']' 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3124297 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3124297 ']' 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3124297 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3124297 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3124297' 00:12:26.931 killing process with pid 3124297 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3124297 00:12:26.931 20:00:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3124297 00:12:27.189 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:27.189 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:27.189 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:27.189 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.189 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:27.189 20:00:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.189 20:00:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.189 20:00:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.773 20:00:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:29.773 00:12:29.773 real 0m5.656s 00:12:29.773 user 0m6.295s 00:12:29.773 sys 0m1.883s 00:12:29.773 20:00:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:29.773 20:00:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:29.773 ************************************ 00:12:29.773 END TEST nvmf_multitarget 00:12:29.773 ************************************ 00:12:29.773 20:00:16 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:29.773 20:00:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:29.773 20:00:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:29.773 20:00:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:29.773 ************************************ 00:12:29.773 START TEST nvmf_rpc 00:12:29.773 ************************************ 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:29.773 * Looking for test storage... 00:12:29.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.773 20:00:16 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- paths/export.sh@2-@6 -- # [PATH setup records elided: the same prepend/export/echo sequence as in the multitarget preamble above] 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.773
20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:29.773 20:00:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.676 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.676 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:31.676 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:31.676 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:31.676 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:31.676 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:31.676 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:31.676 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:31.676 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:31.676 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:31.676 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:31.676 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:31.676 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:31.677 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:31.677 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:31.677 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.677 
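Device discovery here is pure sysfs: for each supported PCI function the script globs /sys/bus/pci/devices/$pci/net/ and strips the path to get the kernel interface name (cvl_0_0 above); the same steps repeat for the second port just below. A condensed sketch of that lookup, filtering on the E810 vendor/device pair from the trace (the explicit vendor/device reads replace the script's cached pci_bus_cache table):

  #!/usr/bin/env bash
  for pci in /sys/bus/pci/devices/*; do
      read -r vendor < "$pci/vendor"
      read -r device < "$pci/device"
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue   # Intel E810 (ice driver)
      [[ -d $pci/net ]] || continue                              # skip functions with no netdev bound
      pci_net_devs=("$pci/net/"*)                                # one directory per interface
      pci_net_devs=("${pci_net_devs[@]##*/}")                    # keep the interface names only
      echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
  done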
20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:31.677 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:31.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:31.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:12:31.677 00:12:31.677 --- 10.0.0.2 ping statistics --- 00:12:31.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.677 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:12:31.677 00:12:31.677 --- 10.0.0.1 ping statistics --- 00:12:31.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.677 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3126396 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.677 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3126396 00:12:31.678 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3126396 ']' 00:12:31.678 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.678 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:31.678 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.678 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:31.678 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.678 [2024-07-13 20:00:19.266917] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
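The topology nvmf_tcp_init builds above needs no switch: the target port cvl_0_0 (10.0.0.2) moves into the cvl_0_0_ns_spdk namespace while the initiator port cvl_0_1 (10.0.0.1) stays in the root namespace (the two ports are presumably cabled back-to-back, given both pings succeed), and nvmf_tgt is then launched under `ip netns exec` as traced. Condensed replay of those commands, run as root, address flushes omitted:

  #!/usr/bin/env bash
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                         # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                      # root ns -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                  # namespace -> root ns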
00:12:31.678 [2024-07-13 20:00:19.266992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.678 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.934 [2024-07-13 20:00:19.336659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.934 [2024-07-13 20:00:19.433436] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.934 [2024-07-13 20:00:19.433489] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.934 [2024-07-13 20:00:19.433513] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.934 [2024-07-13 20:00:19.433527] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.934 [2024-07-13 20:00:19.433540] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.934 [2024-07-13 20:00:19.433615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.934 [2024-07-13 20:00:19.433679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.934 [2024-07-13 20:00:19.433699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.934 [2024-07-13 20:00:19.433701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.934 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:31.934 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:31.934 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.934 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:31.934 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.934 20:00:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.934 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:31.934 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.934 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.934 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.934 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:31.934 "tick_rate": 2700000000, 00:12:31.934 "poll_groups": [ 00:12:31.934 { 00:12:31.934 "name": "nvmf_tgt_poll_group_000", 00:12:31.934 "admin_qpairs": 0, 00:12:31.934 "io_qpairs": 0, 00:12:31.934 "current_admin_qpairs": 0, 00:12:31.934 "current_io_qpairs": 0, 00:12:31.934 "pending_bdev_io": 0, 00:12:31.934 "completed_nvme_io": 0, 00:12:31.934 "transports": [] 00:12:31.934 }, 00:12:31.934 { 00:12:31.934 "name": "nvmf_tgt_poll_group_001", 00:12:31.934 "admin_qpairs": 0, 00:12:31.934 "io_qpairs": 0, 00:12:31.934 "current_admin_qpairs": 0, 00:12:31.934 "current_io_qpairs": 0, 00:12:31.934 "pending_bdev_io": 0, 00:12:31.934 "completed_nvme_io": 0, 00:12:31.934 "transports": [] 00:12:31.934 }, 00:12:31.934 { 00:12:31.934 "name": "nvmf_tgt_poll_group_002", 00:12:31.934 "admin_qpairs": 0, 00:12:31.934 "io_qpairs": 0, 00:12:31.934 "current_admin_qpairs": 0, 00:12:31.934 "current_io_qpairs": 0, 00:12:31.934 "pending_bdev_io": 0, 00:12:31.934 "completed_nvme_io": 0, 00:12:31.934 "transports": [] 
00:12:31.934 }, 00:12:31.934 { 00:12:31.934 "name": "nvmf_tgt_poll_group_003", 00:12:31.934 "admin_qpairs": 0, 00:12:31.934 "io_qpairs": 0, 00:12:31.934 "current_admin_qpairs": 0, 00:12:31.934 "current_io_qpairs": 0, 00:12:31.934 "pending_bdev_io": 0, 00:12:31.934 "completed_nvme_io": 0, 00:12:31.934 "transports": [] 00:12:31.934 } 00:12:31.934 ] 00:12:31.934 }' 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.192 [2024-07-13 20:00:19.659893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.192 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:32.192 "tick_rate": 2700000000, 00:12:32.192 "poll_groups": [ 00:12:32.192 { 00:12:32.192 "name": "nvmf_tgt_poll_group_000", 00:12:32.192 "admin_qpairs": 0, 00:12:32.192 "io_qpairs": 0, 00:12:32.192 "current_admin_qpairs": 0, 00:12:32.192 "current_io_qpairs": 0, 00:12:32.192 "pending_bdev_io": 0, 00:12:32.192 "completed_nvme_io": 0, 00:12:32.192 "transports": [ 00:12:32.192 { 00:12:32.192 "trtype": "TCP" 00:12:32.192 } 00:12:32.192 ] 00:12:32.192 }, 00:12:32.192 { 00:12:32.192 "name": "nvmf_tgt_poll_group_001", 00:12:32.192 "admin_qpairs": 0, 00:12:32.192 "io_qpairs": 0, 00:12:32.192 "current_admin_qpairs": 0, 00:12:32.192 "current_io_qpairs": 0, 00:12:32.192 "pending_bdev_io": 0, 00:12:32.192 "completed_nvme_io": 0, 00:12:32.192 "transports": [ 00:12:32.192 { 00:12:32.192 "trtype": "TCP" 00:12:32.192 } 00:12:32.192 ] 00:12:32.192 }, 00:12:32.193 { 00:12:32.193 "name": "nvmf_tgt_poll_group_002", 00:12:32.193 "admin_qpairs": 0, 00:12:32.193 "io_qpairs": 0, 00:12:32.193 "current_admin_qpairs": 0, 00:12:32.193 "current_io_qpairs": 0, 00:12:32.193 "pending_bdev_io": 0, 00:12:32.193 "completed_nvme_io": 0, 00:12:32.193 "transports": [ 00:12:32.193 { 00:12:32.193 "trtype": "TCP" 00:12:32.193 } 00:12:32.193 ] 00:12:32.193 }, 00:12:32.193 { 00:12:32.193 "name": "nvmf_tgt_poll_group_003", 00:12:32.193 "admin_qpairs": 0, 00:12:32.193 "io_qpairs": 0, 00:12:32.193 "current_admin_qpairs": 0, 00:12:32.193 "current_io_qpairs": 0, 00:12:32.193 "pending_bdev_io": 0, 00:12:32.193 "completed_nvme_io": 0, 00:12:32.193 "transports": [ 00:12:32.193 { 00:12:32.193 "trtype": "TCP" 00:12:32.193 } 00:12:32.193 ] 00:12:32.193 } 00:12:32.193 ] 
00:12:32.193 }' 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.193 Malloc1 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.193 [2024-07-13 20:00:19.813329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:32.193 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:32.193 [2024-07-13 20:00:19.835738] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:32.453 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:32.453 could not add new controller: failed to write to nvme-fabrics device 00:12:32.453 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:32.453 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:32.453 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:32.453 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:32.453 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.453 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.453 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.453 20:00:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.453 20:00:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.024 20:00:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.024 20:00:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:33.024 20:00:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.024 20:00:20 
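The failed connect above is the access-control path under test: with allow-any-host disabled, qpair admission (ctrlr.c:nvmf_qpair_access_allowed) rejects the unlisted host NQN and nvme-cli surfaces it as an I/O error, and only after nvmf_subsystem_add_host does the same connect succeed; the mirror-image check below removes the host again and then re-opens the subsystem with allow_any_host -e. Abridged sketch of the sequence, assuming SPDK's scripts/rpc.py on the default /var/tmp/spdk.sock (the trace additionally passes --hostid and -q, omitted here):

  #!/usr/bin/env bash
  rpc=scripts/rpc.py
  subnqn=nqn.2016-06.io.spdk:cnode1
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  $rpc nvmf_subsystem_allow_any_host -d "$subnqn"        # close the subsystem to unlisted hosts
  nvme connect --hostnqn="$hostnqn" -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420 \
      && echo "unexpected: connect should have been rejected"
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn"      # whitelist exactly this host NQN
  nvme connect --hostnqn="$hostnqn" -t tcp -n "$subnqn" -a 10.0.0.2 -s 4420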
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:33.024 20:00:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:34.926 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:34.926 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:34.926 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.926 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:34.926 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.926 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:34.926 20:00:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.927 20:00:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.927 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:34.927 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:34.927 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:35.184 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.185 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:35.185 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x 
/usr/sbin/nvme ]] 00:12:35.185 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.185 [2024-07-13 20:00:22.625052] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:35.185 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:35.185 could not add new controller: failed to write to nvme-fabrics device 00:12:35.185 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:35.185 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:35.185 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:35.185 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:35.185 20:00:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:35.185 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.185 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.185 20:00:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.185 20:00:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.753 20:00:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.753 20:00:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:35.753 20:00:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.753 20:00:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:35.753 20:00:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:37.651 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:37.651 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:37.651 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.651 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:37.651 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.651 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:37.651 20:00:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 
-- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.911 [2024-07-13 20:00:25.380691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.911 20:00:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.480 20:00:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.480 20:00:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:38.480 20:00:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.480 20:00:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:38.480 20:00:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.009 [2024-07-13 20:00:28.199002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.009 
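Every connect and disconnect in this loop is gated by the waitforserial / waitforserial_disconnect helpers traced at autotest_common.sh@1194-1227: they simply poll lsblk until a block device carrying the subsystem serial appears (or disappears). Reduced to its core, with the retry bound and sleep interval as traced; the timeout return path is an assumption, since the trace never hits it:

  # Poll until a namespace with the given serial shows up as a block device.
  waitforserial() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          sleep 2
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
      done
      return 1   # never appeared within ~32s (assumed failure path)
  }
  waitforserial SPDKISFASTANDAWESOME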
20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.009 20:00:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.268 20:00:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.268 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:41.268 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.268 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:41.268 20:00:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:43.800 20:00:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:43.800 20:00:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:43.800 20:00:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.800 20:00:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:43.800 20:00:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.800 20:00:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:43.800 20:00:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.800 20:00:31 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.800 [2024-07-13 20:00:31.055784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.800 20:00:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.365 20:00:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.365 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:44.365 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.365 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:44.365 20:00:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:46.279 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:46.279 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:46.279 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.279 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:46.279 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.279 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:46.279 20:00:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:46.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.279 20:00:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:46.279 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:46.279 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.280 [2024-07-13 20:00:33.886461] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.280 20:00:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.260 20:00:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.260 20:00:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:47.260 20:00:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 
00:12:47.260 20:00:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:47.260 20:00:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.172 [2024-07-13 20:00:36.798505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.172 20:00:36 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.172 20:00:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.107 20:00:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.107 20:00:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:12:50.107 20:00:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.107 20:00:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:50.107 20:00:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
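That delete completes another pass of the rpc.sh@81 loop: each of the five iterations rebuilds the subsystem from scratch, attaches Malloc1 as namespace 5, connects, waits for the serial, disconnects, and tears everything down again. One iteration, condensed (rpc.py socket path and the waitforserial sketch above are assumed):

  #!/usr/bin/env bash
  rpc=scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5       # Malloc1 exported as nsid 5
      $rpc nvmf_subsystem_allow_any_host "$nqn"            # called exactly as in the trace
      nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420    # trace also passes --hostnqn/--hostid
      waitforserial SPDKISFASTANDAWESOME
      nvme disconnect -n "$nqn"
      $rpc nvmf_subsystem_remove_ns "$nqn" 5
      $rpc nvmf_delete_subsystem "$nqn"
  done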
00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.022 [2024-07-13 20:00:39.611714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.022 [2024-07-13 20:00:39.659794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.022 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 [2024-07-13 20:00:39.708106] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 [2024-07-13 20:00:39.756174] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.293 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.294 [2024-07-13 20:00:39.804342] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:52.294 "tick_rate": 2700000000, 00:12:52.294 "poll_groups": [ 00:12:52.294 { 00:12:52.294 "name": "nvmf_tgt_poll_group_000", 00:12:52.294 "admin_qpairs": 2, 00:12:52.294 
"io_qpairs": 84, 00:12:52.294 "current_admin_qpairs": 0, 00:12:52.294 "current_io_qpairs": 0, 00:12:52.294 "pending_bdev_io": 0, 00:12:52.294 "completed_nvme_io": 136, 00:12:52.294 "transports": [ 00:12:52.294 { 00:12:52.294 "trtype": "TCP" 00:12:52.294 } 00:12:52.294 ] 00:12:52.294 }, 00:12:52.294 { 00:12:52.294 "name": "nvmf_tgt_poll_group_001", 00:12:52.294 "admin_qpairs": 2, 00:12:52.294 "io_qpairs": 84, 00:12:52.294 "current_admin_qpairs": 0, 00:12:52.294 "current_io_qpairs": 0, 00:12:52.294 "pending_bdev_io": 0, 00:12:52.294 "completed_nvme_io": 183, 00:12:52.294 "transports": [ 00:12:52.294 { 00:12:52.294 "trtype": "TCP" 00:12:52.294 } 00:12:52.294 ] 00:12:52.294 }, 00:12:52.294 { 00:12:52.294 "name": "nvmf_tgt_poll_group_002", 00:12:52.294 "admin_qpairs": 1, 00:12:52.294 "io_qpairs": 84, 00:12:52.294 "current_admin_qpairs": 0, 00:12:52.294 "current_io_qpairs": 0, 00:12:52.294 "pending_bdev_io": 0, 00:12:52.294 "completed_nvme_io": 183, 00:12:52.294 "transports": [ 00:12:52.294 { 00:12:52.294 "trtype": "TCP" 00:12:52.294 } 00:12:52.294 ] 00:12:52.294 }, 00:12:52.294 { 00:12:52.294 "name": "nvmf_tgt_poll_group_003", 00:12:52.294 "admin_qpairs": 2, 00:12:52.294 "io_qpairs": 84, 00:12:52.294 "current_admin_qpairs": 0, 00:12:52.294 "current_io_qpairs": 0, 00:12:52.294 "pending_bdev_io": 0, 00:12:52.294 "completed_nvme_io": 184, 00:12:52.294 "transports": [ 00:12:52.294 { 00:12:52.294 "trtype": "TCP" 00:12:52.294 } 00:12:52.294 ] 00:12:52.294 } 00:12:52.294 ] 00:12:52.294 }' 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.294 20:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.294 rmmod nvme_tcp 00:12:52.552 rmmod nvme_fabrics 00:12:52.552 rmmod nvme_keyring 00:12:52.552 20:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.552 20:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:52.552 20:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:52.552 20:00:39 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3126396 ']' 00:12:52.552 20:00:39 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3126396 00:12:52.552 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3126396 ']' 00:12:52.552 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3126396 00:12:52.552 20:00:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:12:52.552 20:00:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:52.552 20:00:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3126396 00:12:52.552 20:00:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:52.552 20:00:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:52.552 20:00:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3126396' 00:12:52.552 killing process with pid 3126396 00:12:52.552 20:00:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3126396 00:12:52.552 20:00:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3126396 00:12:52.812 20:00:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:52.812 20:00:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:52.813 20:00:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:52.813 20:00:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.813 20:00:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.813 20:00:40 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.813 20:00:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.813 20:00:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.715 20:00:42 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:54.715 00:12:54.715 real 0m25.403s 00:12:54.715 user 1m22.519s 00:12:54.715 sys 0m4.138s 00:12:54.715 20:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:54.715 20:00:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.715 ************************************ 00:12:54.715 END TEST nvmf_rpc 00:12:54.715 ************************************ 00:12:54.715 20:00:42 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:54.715 20:00:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:54.715 20:00:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:54.715 20:00:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:54.715 ************************************ 00:12:54.715 START TEST nvmf_invalid 00:12:54.715 ************************************ 00:12:54.715 20:00:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:54.973 * Looking for test storage... 
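Before the teardown above, the nvmf_rpc test summed per-poll-group counters out of nvmf_get_stats with a small jq-plus-awk helper (jsum), asserting (( 7 > 0 )) admin qpairs and (( 336 > 0 )) I/O qpairs. A hedged re-creation of that aggregation, assuming the stats JSON has been captured to a file; the filename stats.json is illustrative, not from the harness:

# Sum one numeric field across all poll groups of captured nvmf_get_stats output.
jsum() {
    local filter=$1
    jq "$filter" stats.json | awk '{s+=$1} END {print s}'
}

jsum '.poll_groups[].admin_qpairs'   # 2+2+1+2 = 7 for the stats printed above
jsum '.poll_groups[].io_qpairs'      # 84*4   = 336 for the stats printed above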
00:12:54.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:54.973 20:00:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:56.871 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:56.872 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:56.872 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:56.872 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:56.872 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:56.872 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:57.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:57.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:12:57.131 00:12:57.131 --- 10.0.0.2 ping statistics --- 00:12:57.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.131 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:12:57.131 00:12:57.131 --- 10.0.0.1 ping statistics --- 00:12:57.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.131 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3130943 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3130943 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3130943 ']' 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:57.131 20:00:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:57.131 [2024-07-13 20:00:44.634724] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
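The nvmftestinit sequence above moves one port of the NIC pair into a private namespace so the NVMe/TCP session crosses a real network path: cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), while cvl_0_0 becomes the target port inside cvl_0_0_ns_spdk (10.0.0.2). A condensed sketch of that wiring and of the target launch, using the interface names, addresses and flags from the trace; it assumes running as root from an SPDK checkout, and the readiness probe (rpc_get_methods) and loop bound are illustrative rather than the harness's exact waitforlisten:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
ping -c 1 10.0.0.2                                            # verify the path, as the log does

# Launch the target inside the namespace and poll its RPC socket until it answers.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
i=0
until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    (( ++i > 100 )) && { echo 'nvmf_tgt did not come up' >&2; exit 1; }
    sleep 0.1
done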
00:12:57.131 [2024-07-13 20:00:44.634798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.131 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.131 [2024-07-13 20:00:44.701219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.388 [2024-07-13 20:00:44.792027] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.389 [2024-07-13 20:00:44.792087] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.389 [2024-07-13 20:00:44.792102] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.389 [2024-07-13 20:00:44.792114] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.389 [2024-07-13 20:00:44.792124] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.389 [2024-07-13 20:00:44.792262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.389 [2024-07-13 20:00:44.792324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.389 [2024-07-13 20:00:44.792389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.389 [2024-07-13 20:00:44.792395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.389 20:00:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:57.389 20:00:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:12:57.389 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:57.389 20:00:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:57.389 20:00:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:57.389 20:00:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.389 20:00:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:57.389 20:00:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17740 00:12:57.646 [2024-07-13 20:00:45.216517] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:57.646 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:57.646 { 00:12:57.646 "nqn": "nqn.2016-06.io.spdk:cnode17740", 00:12:57.646 "tgt_name": "foobar", 00:12:57.646 "method": "nvmf_create_subsystem", 00:12:57.646 "req_id": 1 00:12:57.646 } 00:12:57.646 Got JSON-RPC error response 00:12:57.646 response: 00:12:57.646 { 00:12:57.646 "code": -32603, 00:12:57.646 "message": "Unable to find target foobar" 00:12:57.646 }' 00:12:57.646 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:57.646 { 00:12:57.646 "nqn": "nqn.2016-06.io.spdk:cnode17740", 00:12:57.646 "tgt_name": "foobar", 00:12:57.646 "method": "nvmf_create_subsystem", 00:12:57.646 "req_id": 1 00:12:57.646 } 00:12:57.646 Got JSON-RPC error response 00:12:57.646 response: 00:12:57.646 { 00:12:57.646 "code": -32603, 00:12:57.646 "message": "Unable to find target foobar" 00:12:57.646 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:57.646 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:57.646 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13178 00:12:57.904 [2024-07-13 20:00:45.465359] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13178: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:57.904 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:57.904 { 00:12:57.904 "nqn": "nqn.2016-06.io.spdk:cnode13178", 00:12:57.904 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:57.904 "method": "nvmf_create_subsystem", 00:12:57.904 "req_id": 1 00:12:57.904 } 00:12:57.904 Got JSON-RPC error response 00:12:57.904 response: 00:12:57.904 { 00:12:57.904 "code": -32602, 00:12:57.904 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:57.904 }' 00:12:57.904 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:57.904 { 00:12:57.904 "nqn": "nqn.2016-06.io.spdk:cnode13178", 00:12:57.904 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:57.904 "method": "nvmf_create_subsystem", 00:12:57.904 "req_id": 1 00:12:57.904 } 00:12:57.904 Got JSON-RPC error response 00:12:57.904 response: 00:12:57.904 { 00:12:57.904 "code": -32602, 00:12:57.904 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:57.904 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:57.904 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:57.904 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23374 00:12:58.162 [2024-07-13 20:00:45.734246] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23374: invalid model number 'SPDK_Controller' 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:58.162 { 00:12:58.162 "nqn": "nqn.2016-06.io.spdk:cnode23374", 00:12:58.162 "model_number": "SPDK_Controller\u001f", 00:12:58.162 "method": "nvmf_create_subsystem", 00:12:58.162 "req_id": 1 00:12:58.162 } 00:12:58.162 Got JSON-RPC error response 00:12:58.162 response: 00:12:58.162 { 00:12:58.162 "code": -32602, 00:12:58.162 "message": "Invalid MN SPDK_Controller\u001f" 00:12:58.162 }' 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:58.162 { 00:12:58.162 "nqn": "nqn.2016-06.io.spdk:cnode23374", 00:12:58.162 "model_number": "SPDK_Controller\u001f", 00:12:58.162 "method": "nvmf_create_subsystem", 00:12:58.162 "req_id": 1 00:12:58.162 } 00:12:58.162 Got JSON-RPC error response 00:12:58.162 response: 00:12:58.162 { 00:12:58.162 "code": -32602, 00:12:58.162 "message": "Invalid MN SPDK_Controller\u001f" 00:12:58.162 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:58.162 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 86 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ( == \- ]] 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '(r0uRCUS$ @)7=AV~n~DN' 00:12:58.163 20:00:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '(r0uRCUS$ @)7=AV~n~DN' nqn.2016-06.io.spdk:cnode24386 00:12:58.421 [2024-07-13 20:00:46.031244] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24386: invalid serial number '(r0uRCUS$ @)7=AV~n~DN' 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:58.421 { 00:12:58.421 "nqn": "nqn.2016-06.io.spdk:cnode24386", 00:12:58.421 "serial_number": "(r0uRCUS$ @)7=AV~n~DN", 00:12:58.421 "method": "nvmf_create_subsystem", 00:12:58.421 "req_id": 1 00:12:58.421 } 00:12:58.421 Got JSON-RPC error response 00:12:58.421 response: 00:12:58.421 { 00:12:58.421 "code": -32602, 
00:12:58.421 "message": "Invalid SN (r0uRCUS$ @)7=AV~n~DN" 00:12:58.421 }' 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:58.421 { 00:12:58.421 "nqn": "nqn.2016-06.io.spdk:cnode24386", 00:12:58.421 "serial_number": "(r0uRCUS$ @)7=AV~n~DN", 00:12:58.421 "method": "nvmf_create_subsystem", 00:12:58.421 "req_id": 1 00:12:58.421 } 00:12:58.421 Got JSON-RPC error response 00:12:58.421 response: 00:12:58.421 { 00:12:58.421 "code": -32602, 00:12:58.421 "message": "Invalid SN (r0uRCUS$ @)7=AV~n~DN" 00:12:58.421 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
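
The trace above completes one full negative probe: invalid.sh assembles an over-length serial number character by character, hands it to nvmf_create_subsystem over JSON-RPC, and the step passes only if the reply carries "Invalid SN"; the loop continuing below is already rebuilding the next 41-character candidate. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from the script (the cnode suffix is randomized per attempt in the real run):

    # Reconstructed sketch, assuming printable ASCII codes as in the chars=('32' ... '127') table
    gen_random_s() {
        local length=$1 ll string= code
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( RANDOM % 95 + 32 ))                 # pick a decimal code point
            string+=$(printf "\\x$(printf %x "$code")")  # hex escape -> literal glyph
        done
        printf '%s\n' "$string"
    }

    serial=$(gen_random_s 41)
    out=$(scripts/rpc.py nvmf_create_subsystem -s "$serial" \
            nqn.2016-06.io.spdk:cnode24386 2>&1) || true
    [[ $out == *"Invalid SN"* ]]    # passes only if the target rejected the serial
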
00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.421 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.678 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
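
Every iteration in this stretch leans on the same two-step idiom: printf %x renders the decimal code point as hex, and echo -e expands the \xNN escape back into the literal byte that gets appended. The iteration resuming just below turns 34 into a double-quote, i.e.:

    printf %x 34      # -> 22
    echo -e '\x22'    # -> "
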
00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 
00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 
00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ! 
== \- ]] 00:12:58.679 20:00:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '!z /dev/null' 00:13:01.259 20:00:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.828 20:00:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:03.828 00:13:03.828 real 0m8.512s 00:13:03.828 user 0m19.691s 00:13:03.828 sys 0m2.364s 00:13:03.828 20:00:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:03.828 20:00:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:03.828 ************************************ 00:13:03.828 END TEST nvmf_invalid 00:13:03.828 ************************************ 00:13:03.828 20:00:50 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:03.828 20:00:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:03.828 20:00:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:03.828 20:00:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:03.828 ************************************ 00:13:03.828 START TEST nvmf_abort 00:13:03.828 ************************************ 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:03.828 * Looking for test storage... 00:13:03.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.828 20:00:50 
nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:03.828 20:00:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:05.730 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:05.730 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:05.730 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:05.731 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:05.731 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:05.731 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:13:05.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:13:05.731 00:13:05.731 --- 10.0.0.2 ping statistics --- 00:13:05.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.731 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:05.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:05.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:13:05.731 00:13:05.731 --- 10.0.0.1 ping statistics --- 00:13:05.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.731 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3133518 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3133518 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3133518 ']' 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:05.731 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.731 [2024-07-13 20:00:53.232589] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
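
Before the target application finishes coming up (its DPDK/EAL banner continues below), nvmf_tcp_init wired the two ice ports into a point-to-point rig: the target-side port is moved into its own network namespace so initiator and target can exercise real NICs on a single host. Condensed from the commands traced above (interface names and addresses are the ones this rig uses):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
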
00:13:05.731 [2024-07-13 20:00:53.232673] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.731 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.731 [2024-07-13 20:00:53.301503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:05.989 [2024-07-13 20:00:53.397414] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.989 [2024-07-13 20:00:53.397473] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.989 [2024-07-13 20:00:53.397490] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.989 [2024-07-13 20:00:53.397505] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.989 [2024-07-13 20:00:53.397516] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.989 [2024-07-13 20:00:53.397610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.989 [2024-07-13 20:00:53.397644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.989 [2024-07-13 20:00:53.397647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.989 [2024-07-13 20:00:53.544649] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.989 Malloc0 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.989 Delay0 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:05.989 20:00:53 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.989 [2024-07-13 20:00:53.615295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.989 20:00:53 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:06.247 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.247 [2024-07-13 20:00:53.722055] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:08.770 Initializing NVMe Controllers 00:13:08.770 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:08.770 controller IO queue size 128 less than required 00:13:08.770 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:08.770 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:08.770 Initialization complete. Launching workers. 
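
At this point abort.sh has provisioned its target entirely over RPC and launched the abort example; the pass/fail counters follow below. The sequence, condensed into direct rpc.py calls standing in for the suite's rpc_cmd wrapper (a sketch, not the verbatim script):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    # wrap the malloc bdev in a delay bdev (latencies in microseconds, so ~1 s here)
    # to keep I/O queued long enough for aborts to land on it
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128
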
00:13:08.770 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34278 00:13:08.770 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34339, failed to submit 62 00:13:08.770 success 34282, unsuccess 57, failed 0 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:08.770 rmmod nvme_tcp 00:13:08.770 rmmod nvme_fabrics 00:13:08.770 rmmod nvme_keyring 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3133518 ']' 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3133518 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3133518 ']' 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3133518 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3133518 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3133518' 00:13:08.770 killing process with pid 3133518 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3133518 00:13:08.770 20:00:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3133518 00:13:08.770 20:00:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:08.770 20:00:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:08.770 20:00:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:08.770 20:00:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:08.770 20:00:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:08.770 20:00:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.770 20:00:56 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.770 20:00:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.692 20:00:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:10.692 00:13:10.692 real 0m7.373s 00:13:10.692 user 0m10.925s 00:13:10.692 sys 0m2.475s 00:13:10.692 20:00:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:10.692 20:00:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:10.692 ************************************ 00:13:10.692 END TEST nvmf_abort 00:13:10.692 ************************************ 00:13:10.692 20:00:58 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:10.692 20:00:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:10.692 20:00:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:10.692 20:00:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:10.692 ************************************ 00:13:10.692 START TEST nvmf_ns_hotplug_stress 00:13:10.692 ************************************ 00:13:10.692 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:10.957 * Looking for test storage... 00:13:10.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.957 20:00:58 
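
nvmf_abort wrapped up above (about 7.4 s of wall time) and ns_hotplug_stress is sourcing its environment below. Between tests, nvmftestfini tears the rig down along these lines; a sketch assuming _remove_spdk_ns (whose body is not shown in this trace) deletes the cvl_0_0_ns_spdk namespace:

    sync
    modprobe -v -r nvme-tcp          # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    kill 3133518 && wait 3133518     # the nvmf_tgt started for this test
    _remove_spdk_ns                  # assumed: removes the cvl_0_0_ns_spdk namespace
    ip -4 addr flush cvl_0_1
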
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.957 20:00:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:10.957 20:00:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.854 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:12.855 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:12.855 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.855 20:01:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:12.855 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:12.855 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 
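[Editor's note] At this point common.sh has matched both E810 functions (0000:0a:00.0 and 0000:0a:00.1, driver ice), resolved their netdevs through sysfs, and picked cvl_0_0 as the target interface and cvl_0_1 as the initiator. The nvmf_tcp_init entries that follow split that pair across a network namespace so one machine can play both roles. A condensed sketch of the plumbing, using only commands and device names that appear in this trace:

    # Resolve a PCI function to its kernel netdev, as the
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob above does:
    ls /sys/bus/pci/devices/0000:0a:00.0/net/    # -> cvl_0_0 (target side)
    ls /sys/bus/pci/devices/0000:0a:00.1/net/    # -> cvl_0_1 (initiator side)

    # Isolate the target port in its own namespace, address both ends,
    # bring the links up, and open TCP port 4420 for NVMe/TCP:
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT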
00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.855 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.112 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.112 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:13.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:13:13.112 00:13:13.112 --- 10.0.0.2 ping statistics --- 00:13:13.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.112 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:13:13.112 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:13.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:13.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:13:13.112 00:13:13.112 --- 10.0.0.1 ping statistics --- 00:13:13.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.112 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:13:13.112 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.112 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:13.112 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3135857 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3135857 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3135857 ']' 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:13.113 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.113 [2024-07-13 20:01:00.611482] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:13:13.113 [2024-07-13 20:01:00.611586] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.113 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.113 [2024-07-13 20:01:00.684948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.371 [2024-07-13 20:01:00.779294] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:13.371 [2024-07-13 20:01:00.779348] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.371 [2024-07-13 20:01:00.779364] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.371 [2024-07-13 20:01:00.779377] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.371 [2024-07-13 20:01:00.779389] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.371 [2024-07-13 20:01:00.779473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.371 [2024-07-13 20:01:00.779510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.371 [2024-07-13 20:01:00.779513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.371 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:13.371 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:13:13.371 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:13.371 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:13.371 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.371 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.371 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:13.371 20:01:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:13.629 [2024-07-13 20:01:01.164007] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.629 20:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:13.887 20:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.144 [2024-07-13 20:01:01.702932] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.144 20:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:14.400 20:01:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:14.658 Malloc0 00:13:14.658 20:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:14.916 Delay0 00:13:14.916 20:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.173 20:01:02 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:15.428 NULL1 00:13:15.428 20:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:15.686 20:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3136152 00:13:15.686 20:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:15.686 20:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:15.686 20:01:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.943 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.875 Read completed with error (sct=0, sc=11) 00:13:16.875 20:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.132 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:17.389 20:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:17.389 20:01:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:17.646 true 00:13:17.647 20:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:17.647 20:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.210 20:01:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:18.500 20:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:18.500 20:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:18.757 true 00:13:18.757 20:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:18.757 20:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
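[Editor's note] The PERF_PID recorded above (3136152) belongs to the I/O generator that the hotplug loop polls with kill -0: spdk_nvme_perf issues 512-byte random reads while namespaces come and go underneath it. The recurring "Read completed with error (sct=0, sc=11)" entries are therefore the expected signature of the test, not a failure; generic NVMe status 11 (0x0b) is Invalid Namespace or Format, which is what a read racing a hot-removed namespace should see. The -Q 1000 flag appears to keep perf running through these errors and to print only every 1,000th occurrence, hence "Message suppressed 999 times". A sketch of the same invocation, every flag copied from the traced command at ns_hotplug_stress.sh@40 (binary path shortened):

    # 30 s of QD-128, 512-byte random reads over NVMe/TCP on core 0 (-c 0x1),
    # continuing through I/O errors (-Q 1000), against the listener at 10.0.0.2:4420:
    ./spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!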
00:13:19.014 20:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.272 20:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:19.272 20:01:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:19.529 true 00:13:19.529 20:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:19.529 20:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.786 20:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.043 20:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:20.043 20:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:20.301 true 00:13:20.301 20:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:20.301 20:01:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.671 20:01:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:21.671 20:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:21.671 20:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:21.928 true 00:13:21.928 20:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:21.928 20:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.185 20:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.441 20:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:22.441 20:01:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:22.699 true 00:13:22.699 20:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:22.699 20:01:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:13:23.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.637 20:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:23.898 20:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:23.898 20:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:24.155 true 00:13:24.155 20:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:24.155 20:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.412 20:01:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.669 20:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:24.669 20:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:24.925 true 00:13:24.925 20:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:24.925 20:01:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.861 20:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:26.123 20:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:26.123 20:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:26.379 true 00:13:26.379 20:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:26.379 20:01:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.636 20:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.894 20:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:26.894 20:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 
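[Editor's note] Each numbered round above repeats the same four-beat cycle, which is the core of the first test phase: hot-add the Delay0 bdev as a namespace, grow the NULL1 namespace by one unit, confirm the perf workload is still alive, then hot-remove namespace 1 again. A condensed sketch of one iteration (rpc.py path shortened; kill -0 sends no signal and simply fails once the target PID has exited):

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add
    null_size=$((null_size + 1))
    rpc.py bdev_null_resize NULL1 "$null_size"                       # resize under I/O
    kill -0 "$PERF_PID"                                              # perf still running?
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove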
00:13:26.894 true 00:13:27.151 20:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:27.151 20:01:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.082 20:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.082 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:28.343 20:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:28.343 20:01:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:28.600 true 00:13:28.600 20:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:28.600 20:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.857 20:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.114 20:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:29.114 20:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:29.371 true 00:13:29.371 20:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:29.371 20:01:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.628 20:01:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.886 20:01:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:29.886 20:01:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:30.144 true 00:13:30.144 20:01:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:30.144 20:01:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.075 20:01:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:31.332 20:01:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:31.332 20:01:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:31.588 true 00:13:31.588 20:01:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:31.588 20:01:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.850 20:01:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.151 20:01:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:32.151 20:01:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:32.408 true 00:13:32.408 20:01:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:32.408 20:01:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.337 20:01:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.337 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.593 20:01:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:33.593 20:01:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:33.851 true 00:13:33.851 20:01:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:33.851 20:01:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.780 20:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.037 20:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:35.037 20:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:35.295 true 00:13:35.295 20:01:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:35.295 20:01:22 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.551 20:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.809 20:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:35.809 20:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:36.067 true 00:13:36.067 20:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:36.067 20:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.324 20:01:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.581 20:01:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:36.581 20:01:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:36.838 true 00:13:36.838 20:01:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:36.838 20:01:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.771 20:01:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.028 20:01:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:38.028 20:01:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:38.284 true 00:13:38.284 20:01:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:38.284 20:01:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.541 20:01:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.797 20:01:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:38.797 20:01:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:39.054 true 00:13:39.054 20:01:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:39.055 20:01:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:39.312 20:01:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.580 20:01:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:39.580 20:01:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:39.838 true 00:13:39.838 20:01:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:39.838 20:01:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.771 20:01:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.771 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:41.029 20:01:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:41.029 20:01:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:41.285 true 00:13:41.285 20:01:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:41.286 20:01:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.216 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.216 20:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.473 20:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:42.473 20:01:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:42.473 true 00:13:42.473 20:01:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:42.473 20:01:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.731 20:01:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.988 20:01:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1025 00:13:42.988 20:01:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:43.245 true 00:13:43.245 20:01:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:43.245 20:01:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.175 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:44.175 20:01:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.433 20:01:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:44.433 20:01:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:44.693 true 00:13:44.693 20:01:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:44.693 20:01:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.949 20:01:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.206 20:01:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:45.207 20:01:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:45.469 true 00:13:45.469 20:01:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:45.469 20:01:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.444 Initializing NVMe Controllers 00:13:46.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:46.444 Controller IO queue size 128, less than required. 00:13:46.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:46.444 Controller IO queue size 128, less than required. 00:13:46.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:46.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:46.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:46.444 Initialization complete. Launching workers. 
00:13:46.444 ========================================================
00:13:46.445 Latency(us)
00:13:46.445 Device Information : IOPS MiB/s Average min max
00:13:46.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1316.45 0.64 52174.79 2313.90 1062201.65
00:13:46.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11292.39 5.51 11335.66 2966.37 446605.80
00:13:46.445 ========================================================
00:13:46.445 Total : 12608.84 6.16 15599.55 2313.90 1062201.65
00:13:46.445
00:13:46.445 20:01:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.702 20:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:46.702 20:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:46.957 true 00:13:46.960 20:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3136152 00:13:46.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3136152) - No such process 00:13:46.960 20:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3136152 00:13:46.960 20:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.217 20:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:47.474 20:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:47.474 20:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:47.474 20:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:47.474 20:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:47.474 20:01:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:47.731 null0 00:13:47.731 20:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:47.731 20:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:47.731 20:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:47.989 null1 00:13:47.989 20:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:47.989 20:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:47.989 20:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:48.246 null2 00:13:48.246 20:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:48.246 20:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:13:48.246 20:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:48.502 null3 00:13:48.502 20:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:48.502 20:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:48.502 20:01:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:48.759 null4 00:13:48.759 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:48.759 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:48.759 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:49.015 null5 00:13:49.015 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:49.015 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:49.015 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:49.272 null6 00:13:49.272 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:49.272 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:49.272 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:49.529 null7 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
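[Editor's note] The stress phase provisions eight independent backing devices before spawning its workers; as traced above, each is a null bdev of 100 MiB with a 4096-byte block size (null bdevs back no storage, so they are essentially free to create). Condensed, the creation loop at ns_hotplug_stress.sh@59-60 amounts to (rpc.py path shortened):

    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096    # prints the new bdev name, e.g. null0
    done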
00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:49.529 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
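Because all eight workers run concurrently and share the job's stdout, their xtrace records interleave, which is why the nsid order shuffles from one add/remove burst to the next below. Once the last worker is forked, the driver blocks on all of them with the sh@66 wait that appears a few records further on, using the PIDs collected by pids+=($!):

    # sh@66 as it appears in the trace below (worker PIDs from this run)
    wait 3140206 3140207 3140209 3140211 3140213 3140215 3140217 3140219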
00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3140206 3140207 3140209 3140211 3140213 3140215 3140217 3140219 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.530 20:01:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:49.787 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:49.787 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:49.787 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.787 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:49.787 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:49.787 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:49.787 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:49.787 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:50.044 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.044 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.044 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:50.044 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.045 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:50.303 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.303 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.303 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.303 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:50.303 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:50.303 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:50.303 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:50.303 20:01:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.561 20:01:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.561 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.818 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.818 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.818 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.818 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:50.818 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:50.818 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:50.818 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:50.818 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.075 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:51.333 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.333 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.333 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.333 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:51.333 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:51.333 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:51.333 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:51.333 20:01:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:51.591 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.591 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.591 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.592 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.592 
20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:51.850 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.850 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.850 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.850 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:51.850 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:51.850 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:51.850 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:51.850 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.109 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:52.370 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.370 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:52.370 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.370 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:52.370 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:52.370 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.370 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:52.370 20:01:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.628 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:52.886 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:52.886 
20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.886 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:52.886 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.886 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:52.886 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:52.886 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.886 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.144 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:53.145 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.145 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.145 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:53.145 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.145 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.145 20:01:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:53.403 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.403 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:53.403 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:53.403 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:53.403 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:53.403 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:53.661 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:53.661 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:53.661 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.661 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.661 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:53.661 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:13:53.662 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.662 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.919 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:54.176 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:54.176 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.176 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:54.176 
20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:54.176 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:54.176 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:54.176 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:54.176 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.433 20:01:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:54.690 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.690 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:54.690 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:54.690 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:54.690 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:54.690 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:54.690 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:54.690 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
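The run of bare (( ++i )) / (( i < 10 )) records with no rpc.py call between them, just below, is each worker failing its loop guard after the tenth iteration and exiting; once the last one exits, the sh@66 wait returns and teardown begins. The sh@68 trap record that follows suggests the usual autotest pattern of registering the cleanup function as a signal/exit handler and clearing it on the success path before calling it explicitly (a sketch; the registration itself is outside this excerpt):

    trap 'nvmftestfini' SIGINT SIGTERM EXIT   # assumed earlier registration, not shown in this log
    # ... stress workers run to completion ...
    trap - SIGINT SIGTERM EXIT                # sh@68: clear the handler on the success path
    nvmftestfini                              # sh@70: then tear down explicitly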
00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:54.948 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:54.948 rmmod nvme_tcp 00:13:54.948 rmmod nvme_fabrics 00:13:54.948 rmmod nvme_keyring 00:13:54.949 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:54.949 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:54.949 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:54.949 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3135857 ']' 00:13:54.949 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3135857 00:13:54.949 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3135857 ']' 00:13:54.949 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3135857 00:13:54.949 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:13:54.949 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:54.949 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3135857 00:13:54.949 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:54.949 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:54.949 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3135857' 00:13:54.949 killing process with pid 3135857 00:13:54.949 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 3135857 00:13:54.949 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3135857 00:13:55.207 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:55.207 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:55.207 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:55.207 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:55.207 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:55.207 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.207 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:55.207 20:01:42 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.735 20:01:44 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:57.735 00:13:57.735 real 0m46.458s 00:13:57.735 user 3m31.507s 00:13:57.735 sys 0m16.175s 00:13:57.735 20:01:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:57.735 20:01:44 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.735 ************************************ 00:13:57.735 END TEST nvmf_ns_hotplug_stress 00:13:57.735 ************************************ 00:13:57.735 20:01:44 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:57.735 20:01:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:57.735 20:01:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:57.735 20:01:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:57.735 ************************************ 00:13:57.735 START TEST nvmf_connect_stress 00:13:57.735 ************************************ 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:57.735 * Looking for test storage... 
00:13:57.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:57.735 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:57.736 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:57.736 20:01:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:57.736 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:57.736 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.736 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:57.736 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:57.736 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:57.736 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.736 20:01:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:57.736 20:01:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.736 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:57.736 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:57.736 20:01:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:57.736 20:01:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.663 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:59.664 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:59.664 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:59.664 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:59.664 20:01:46 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:59.664 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:59.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:59.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:13:59.664 00:13:59.664 --- 10.0.0.2 ping statistics --- 00:13:59.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.664 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:59.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:59.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:13:59.664 00:13:59.664 --- 10.0.0.1 ping statistics --- 00:13:59.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.664 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3142957 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3142957 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 3142957 ']' 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:59.664 20:01:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.664 [2024-07-13 20:01:47.031380] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
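The ping exchange above is the tail end of nvmf_tcp_init: with a single physical machine, the script moves one of the two detected E810 ports into a private network namespace so the box can act as both NVMe/TCP target and initiator over a real link. Restated as a standalone sketch, with interface names and addresses exactly as in this run:

TARGET_NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                       # start from clean interfaces
ip -4 addr flush cvl_0_1
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"         # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side stays in the root namespace
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                             # root namespace to target namespace
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1  # and back again

Once both pings succeed, every target-side command is prefixed with ip netns exec cvl_0_0_ns_spdk, which is exactly how the nvmf_tgt launch below is recorded.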
00:13:59.664 [2024-07-13 20:01:47.031455] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.664 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.664 [2024-07-13 20:01:47.099747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:59.664 [2024-07-13 20:01:47.190000] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.664 [2024-07-13 20:01:47.190064] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.664 [2024-07-13 20:01:47.190081] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.664 [2024-07-13 20:01:47.190094] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.664 [2024-07-13 20:01:47.190106] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.664 [2024-07-13 20:01:47.190200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.664 [2024-07-13 20:01:47.190245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.664 [2024-07-13 20:01:47.190248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.664 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:59.664 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:13:59.664 20:01:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:59.664 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:59.664 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.923 [2024-07-13 20:01:47.328211] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.923 [2024-07-13 20:01:47.359034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.923 NULL1 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3143039 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.923 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.924 20:01:47 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.924 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.199 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.199 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:00.199 20:01:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.199 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.199 20:01:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.466 20:01:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.466 20:01:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:00.466 20:01:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.466 20:01:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.466 20:01:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.030 20:01:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.030 20:01:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:01.030 20:01:48 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.030 20:01:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.030 20:01:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.287 20:01:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.287 20:01:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:01.287 20:01:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.287 20:01:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.287 20:01:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.544 20:01:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.544 20:01:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:01.544 20:01:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.544 20:01:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.544 20:01:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.800 20:01:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.800 20:01:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:01.800 20:01:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.800 20:01:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.800 20:01:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.057 20:01:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.057 20:01:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:02.057 20:01:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.057 20:01:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.057 20:01:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.621 20:01:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.621 20:01:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:02.621 20:01:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.621 20:01:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.621 20:01:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.877 20:01:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.877 20:01:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:02.877 20:01:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.877 20:01:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.877 20:01:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.133 20:01:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.133 20:01:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:03.133 20:01:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:14:03.133 20:01:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.133 20:01:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.390 20:01:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.390 20:01:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:03.390 20:01:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.390 20:01:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.390 20:01:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.646 20:01:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.646 20:01:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:03.646 20:01:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.646 20:01:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.646 20:01:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.208 20:01:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.208 20:01:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:04.208 20:01:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.208 20:01:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.208 20:01:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.464 20:01:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.464 20:01:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:04.464 20:01:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.464 20:01:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.464 20:01:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.721 20:01:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.721 20:01:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:04.721 20:01:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.721 20:01:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.721 20:01:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.978 20:01:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.978 20:01:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:04.978 20:01:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.978 20:01:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.978 20:01:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.237 20:01:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.237 20:01:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:05.237 20:01:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.237 20:01:52 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.237 20:01:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.800 20:01:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.800 20:01:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:05.800 20:01:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.800 20:01:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.800 20:01:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.055 20:01:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.056 20:01:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:06.056 20:01:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.056 20:01:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.056 20:01:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.313 20:01:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.313 20:01:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:06.313 20:01:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.313 20:01:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.313 20:01:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.570 20:01:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.570 20:01:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:06.570 20:01:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.570 20:01:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.570 20:01:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.135 20:01:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.135 20:01:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:07.135 20:01:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.135 20:01:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.135 20:01:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.392 20:01:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.392 20:01:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:07.392 20:01:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.392 20:01:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.392 20:01:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.650 20:01:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.650 20:01:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:07.650 20:01:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.650 20:01:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:07.650 20:01:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.908 20:01:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.908 20:01:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:07.908 20:01:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.908 20:01:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.908 20:01:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.166 20:01:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.166 20:01:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:08.166 20:01:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.166 20:01:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.166 20:01:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.729 20:01:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.729 20:01:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:08.729 20:01:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.729 20:01:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.729 20:01:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.986 20:01:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.986 20:01:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:08.986 20:01:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.986 20:01:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.986 20:01:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.243 20:01:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.243 20:01:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:09.243 20:01:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.243 20:01:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.243 20:01:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.501 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.501 20:01:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:09.501 20:01:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.501 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.501 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.757 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.758 20:01:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:09.758 20:01:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.758 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.758 20:01:57 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.015 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3143039 00:14:10.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3143039) - No such process 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3143039 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:10.272 rmmod nvme_tcp 00:14:10.272 rmmod nvme_fabrics 00:14:10.272 rmmod nvme_keyring 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3142957 ']' 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3142957 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 3142957 ']' 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 3142957 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3142957 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3142957' 00:14:10.272 killing process with pid 3142957 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 3142957 00:14:10.272 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 3142957 00:14:10.531 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:10.531 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:10.531 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
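The long run of alternating kill -0 3143039 and rpc_cmd calls above is the connect_stress monitor loop: while the stress tool stays alive, the script keeps replaying the rpc.txt batch (assembled earlier by the seq 1 20 / cat loop) so namespaces are repeatedly hot-added and hot-removed under the subsystem being hammered, and the loop exits once kill -0 reports the pid gone. A minimal sketch, assuming rpc.txt holds those queued namespace RPCs (the batch contents themselves are not visible in the trace):

while kill -0 "$PERF_PID"; do      # true while connect_stress is still running
    rpc_cmd < "$rpcs"              # replay the queued RPCs against the live target
done
wait "$PERF_PID"                   # collect the stress tool's exit status
rm -f "$rpcs"

The final, failing kill -0 is what produces the '(3143039) - No such process' line above; wait then surfaces a nonzero exit code if connect_stress itself failed.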
00:14:10.531 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.531 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:10.531 20:01:57 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.531 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.531 20:01:57 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.453 20:02:00 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:12.453 00:14:12.453 real 0m15.186s 00:14:12.453 user 0m38.061s 00:14:12.453 sys 0m5.988s 00:14:12.453 20:02:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:12.453 20:02:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:12.453 ************************************ 00:14:12.453 END TEST nvmf_connect_stress 00:14:12.453 ************************************ 00:14:12.453 20:02:00 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:12.453 20:02:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:12.453 20:02:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:12.453 20:02:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:12.453 ************************************ 00:14:12.453 START TEST nvmf_fused_ordering 00:14:12.453 ************************************ 00:14:12.453 20:02:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:12.711 * Looking for test storage... 
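Every banner pair in this log comes from the same run_test wrapper: it prints the asterisk-framed START TEST line, times the test body (producing the real/user/sys triple above), and closes with the matching END TEST banner. A condensed sketch of that shape (the actual function in autotest_common.sh also validates its arguments and manages xtrace state, which this omits):

run_test() {
    local test_name=$1
    shift                                    # what remains is the test script plus its flags
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                                # emits the real/user/sys summary seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

The run_test nvmf_fused_ordering invocation above has exactly this shape: the test name, then the fused_ordering.sh path and its --transport=tcp flag.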
00:14:12.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:14:12.711 20:02:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:14.662 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:14.662 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:14.662 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:14.663 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:14.663 20:02:02 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:14.663 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:14.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:14.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:14:14.663 00:14:14.663 --- 10.0.0.2 ping statistics --- 00:14:14.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.663 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:14:14.663 00:14:14.663 --- 10.0.0.1 ping statistics --- 00:14:14.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.663 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3146246 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3146246 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 3146246 ']' 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:14.663 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:14.663 [2024-07-13 20:02:02.316504] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
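For reference, the network plumbing traced above reduces to the commands below. This is a sketch reconstructed from this run's trace, not a general recipe: the interface names cvl_0_0/cvl_0_1, the namespace name cvl_0_0_ns_spdk, and the 10.0.0.0/24 addresses are specific to this host, and everything assumes root on Linux with iproute2 and iptables installed.

# Move one port of the NIC pair into a private namespace to act as the
# NVMe-oF target; leave its peer in the root namespace as the initiator.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator interface, then verify
# reachability in both directions (the pings above).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1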
00:14:14.663 [2024-07-13 20:02:02.316591] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.920 EAL: No free 2048 kB hugepages reported on node 1 00:14:14.920 [2024-07-13 20:02:02.385289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.921 [2024-07-13 20:02:02.474208] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.921 [2024-07-13 20:02:02.474274] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.921 [2024-07-13 20:02:02.474291] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.921 [2024-07-13 20:02:02.474305] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.921 [2024-07-13 20:02:02.474318] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.921 [2024-07-13 20:02:02.474348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.179 [2024-07-13 20:02:02.618706] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.179 [2024-07-13 20:02:02.634942] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- 
target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.179 NULL1 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.179 20:02:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:15.179 [2024-07-13 20:02:02.678764] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:15.179 [2024-07-13 20:02:02.678806] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3146268 ] 00:14:15.179 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.742 Attached to nqn.2016-06.io.spdk:cnode1 00:14:15.742 Namespace ID: 1 size: 1GB 00:14:15.742 fused_ordering(0) 00:14:15.742 fused_ordering(1) 00:14:15.742 fused_ordering(2) 00:14:15.742 fused_ordering(3) 00:14:15.742 fused_ordering(4) 00:14:15.742 fused_ordering(5) 00:14:15.742 fused_ordering(6) 00:14:15.742 fused_ordering(7) 00:14:15.742 fused_ordering(8) 00:14:15.742 fused_ordering(9) 00:14:15.742 fused_ordering(10) 00:14:15.742 fused_ordering(11) 00:14:15.742 fused_ordering(12) 00:14:15.742 fused_ordering(13) 00:14:15.742 fused_ordering(14) 00:14:15.742 fused_ordering(15) 00:14:15.742 fused_ordering(16) 00:14:15.742 fused_ordering(17) 00:14:15.742 fused_ordering(18) 00:14:15.742 fused_ordering(19) 00:14:15.742 fused_ordering(20) 00:14:15.742 fused_ordering(21) 00:14:15.742 fused_ordering(22) 00:14:15.742 fused_ordering(23) 00:14:15.742 fused_ordering(24) 00:14:15.742 fused_ordering(25) 00:14:15.742 fused_ordering(26) 00:14:15.742 fused_ordering(27) 00:14:15.742 fused_ordering(28) 00:14:15.742 fused_ordering(29) 00:14:15.742 fused_ordering(30) 00:14:15.742 fused_ordering(31) 00:14:15.742 fused_ordering(32) 00:14:15.742 fused_ordering(33) 00:14:15.742 fused_ordering(34) 00:14:15.742 fused_ordering(35) 00:14:15.742 fused_ordering(36) 00:14:15.742 fused_ordering(37) 00:14:15.742 fused_ordering(38) 00:14:15.742 fused_ordering(39) 00:14:15.743 fused_ordering(40) 00:14:15.743 fused_ordering(41) 00:14:15.743 fused_ordering(42) 00:14:15.743 fused_ordering(43) 00:14:15.743 fused_ordering(44) 00:14:15.743 fused_ordering(45) 
00:14:15.743 fused_ordering(46) 00:14:15.743 fused_ordering(47) 00:14:15.743 fused_ordering(48) 00:14:15.743 fused_ordering(49) 00:14:15.743 fused_ordering(50) 00:14:15.743 fused_ordering(51) 00:14:15.743 fused_ordering(52) 00:14:15.743 fused_ordering(53) 00:14:15.743 fused_ordering(54) 00:14:15.743 fused_ordering(55) 00:14:15.743 fused_ordering(56) 00:14:15.743 fused_ordering(57) 00:14:15.743 fused_ordering(58) 00:14:15.743 fused_ordering(59) 00:14:15.743 fused_ordering(60) 00:14:15.743 fused_ordering(61) 00:14:15.743 fused_ordering(62) 00:14:15.743 fused_ordering(63) 00:14:15.743 fused_ordering(64) 00:14:15.743 fused_ordering(65) 00:14:15.743 fused_ordering(66) 00:14:15.743 fused_ordering(67) 00:14:15.743 fused_ordering(68) 00:14:15.743 fused_ordering(69) 00:14:15.743 fused_ordering(70) 00:14:15.743 fused_ordering(71) 00:14:15.743 fused_ordering(72) 00:14:15.743 fused_ordering(73) 00:14:15.743 fused_ordering(74) 00:14:15.743 fused_ordering(75) 00:14:15.743 fused_ordering(76) 00:14:15.743 fused_ordering(77) 00:14:15.743 fused_ordering(78) 00:14:15.743 fused_ordering(79) 00:14:15.743 fused_ordering(80) 00:14:15.743 fused_ordering(81) 00:14:15.743 fused_ordering(82) 00:14:15.743 fused_ordering(83) 00:14:15.743 fused_ordering(84) 00:14:15.743 fused_ordering(85) 00:14:15.743 fused_ordering(86) 00:14:15.743 fused_ordering(87) 00:14:15.743 fused_ordering(88) 00:14:15.743 fused_ordering(89) 00:14:15.743 fused_ordering(90) 00:14:15.743 fused_ordering(91) 00:14:15.743 fused_ordering(92) 00:14:15.743 fused_ordering(93) 00:14:15.743 fused_ordering(94) 00:14:15.743 fused_ordering(95) 00:14:15.743 fused_ordering(96) 00:14:15.743 fused_ordering(97) 00:14:15.743 fused_ordering(98) 00:14:15.743 fused_ordering(99) 00:14:15.743 fused_ordering(100) 00:14:15.743 fused_ordering(101) 00:14:15.743 fused_ordering(102) 00:14:15.743 fused_ordering(103) 00:14:15.743 fused_ordering(104) 00:14:15.743 fused_ordering(105) 00:14:15.743 fused_ordering(106) 00:14:15.743 fused_ordering(107) 00:14:15.743 fused_ordering(108) 00:14:15.743 fused_ordering(109) 00:14:15.743 fused_ordering(110) 00:14:15.743 fused_ordering(111) 00:14:15.743 fused_ordering(112) 00:14:15.743 fused_ordering(113) 00:14:15.743 fused_ordering(114) 00:14:15.743 fused_ordering(115) 00:14:15.743 fused_ordering(116) 00:14:15.743 fused_ordering(117) 00:14:15.743 fused_ordering(118) 00:14:15.743 fused_ordering(119) 00:14:15.743 fused_ordering(120) 00:14:15.743 fused_ordering(121) 00:14:15.743 fused_ordering(122) 00:14:15.743 fused_ordering(123) 00:14:15.743 fused_ordering(124) 00:14:15.743 fused_ordering(125) 00:14:15.743 fused_ordering(126) 00:14:15.743 fused_ordering(127) 00:14:15.743 fused_ordering(128) 00:14:15.743 fused_ordering(129) 00:14:15.743 fused_ordering(130) 00:14:15.743 fused_ordering(131) 00:14:15.743 fused_ordering(132) 00:14:15.743 fused_ordering(133) 00:14:15.743 fused_ordering(134) 00:14:15.743 fused_ordering(135) 00:14:15.743 fused_ordering(136) 00:14:15.743 fused_ordering(137) 00:14:15.743 fused_ordering(138) 00:14:15.743 fused_ordering(139) 00:14:15.743 fused_ordering(140) 00:14:15.743 fused_ordering(141) 00:14:15.743 fused_ordering(142) 00:14:15.743 fused_ordering(143) 00:14:15.743 fused_ordering(144) 00:14:15.743 fused_ordering(145) 00:14:15.743 fused_ordering(146) 00:14:15.743 fused_ordering(147) 00:14:15.743 fused_ordering(148) 00:14:15.743 fused_ordering(149) 00:14:15.743 fused_ordering(150) 00:14:15.743 fused_ordering(151) 00:14:15.743 fused_ordering(152) 00:14:15.743 fused_ordering(153) 00:14:15.743 fused_ordering(154) 
00:14:15.743 fused_ordering(155) 00:14:15.743 fused_ordering(156) 00:14:15.743 fused_ordering(157) 00:14:15.743 fused_ordering(158) 00:14:15.743 fused_ordering(159) 00:14:15.743 fused_ordering(160) 00:14:15.743 fused_ordering(161) 00:14:15.743 fused_ordering(162) 00:14:15.743 fused_ordering(163) 00:14:15.743 fused_ordering(164) 00:14:15.743 fused_ordering(165) 00:14:15.743 fused_ordering(166) 00:14:15.743 fused_ordering(167) 00:14:15.743 fused_ordering(168) 00:14:15.743 fused_ordering(169) 00:14:15.743 fused_ordering(170) 00:14:15.743 fused_ordering(171) 00:14:15.743 fused_ordering(172) 00:14:15.743 fused_ordering(173) 00:14:15.743 fused_ordering(174) 00:14:15.743 fused_ordering(175) 00:14:15.743 fused_ordering(176) 00:14:15.743 fused_ordering(177) 00:14:15.743 fused_ordering(178) 00:14:15.743 fused_ordering(179) 00:14:15.743 fused_ordering(180) 00:14:15.743 fused_ordering(181) 00:14:15.743 fused_ordering(182) 00:14:15.743 fused_ordering(183) 00:14:15.743 fused_ordering(184) 00:14:15.743 fused_ordering(185) 00:14:15.743 fused_ordering(186) 00:14:15.743 fused_ordering(187) 00:14:15.743 fused_ordering(188) 00:14:15.743 fused_ordering(189) 00:14:15.743 fused_ordering(190) 00:14:15.743 fused_ordering(191) 00:14:15.743 fused_ordering(192) 00:14:15.743 fused_ordering(193) 00:14:15.743 fused_ordering(194) 00:14:15.743 fused_ordering(195) 00:14:15.743 fused_ordering(196) 00:14:15.743 fused_ordering(197) 00:14:15.743 fused_ordering(198) 00:14:15.743 fused_ordering(199) 00:14:15.743 fused_ordering(200) 00:14:15.743 fused_ordering(201) 00:14:15.743 fused_ordering(202) 00:14:15.743 fused_ordering(203) 00:14:15.743 fused_ordering(204) 00:14:15.743 fused_ordering(205) 00:14:16.308 fused_ordering(206) 00:14:16.308 fused_ordering(207) 00:14:16.308 fused_ordering(208) 00:14:16.308 fused_ordering(209) 00:14:16.308 fused_ordering(210) 00:14:16.308 fused_ordering(211) 00:14:16.308 fused_ordering(212) 00:14:16.308 fused_ordering(213) 00:14:16.308 fused_ordering(214) 00:14:16.308 fused_ordering(215) 00:14:16.308 fused_ordering(216) 00:14:16.308 fused_ordering(217) 00:14:16.308 fused_ordering(218) 00:14:16.308 fused_ordering(219) 00:14:16.308 fused_ordering(220) 00:14:16.308 fused_ordering(221) 00:14:16.308 fused_ordering(222) 00:14:16.308 fused_ordering(223) 00:14:16.308 fused_ordering(224) 00:14:16.308 fused_ordering(225) 00:14:16.308 fused_ordering(226) 00:14:16.308 fused_ordering(227) 00:14:16.308 fused_ordering(228) 00:14:16.308 fused_ordering(229) 00:14:16.308 fused_ordering(230) 00:14:16.308 fused_ordering(231) 00:14:16.308 fused_ordering(232) 00:14:16.308 fused_ordering(233) 00:14:16.308 fused_ordering(234) 00:14:16.308 fused_ordering(235) 00:14:16.308 fused_ordering(236) 00:14:16.308 fused_ordering(237) 00:14:16.308 fused_ordering(238) 00:14:16.308 fused_ordering(239) 00:14:16.308 fused_ordering(240) 00:14:16.308 fused_ordering(241) 00:14:16.308 fused_ordering(242) 00:14:16.308 fused_ordering(243) 00:14:16.308 fused_ordering(244) 00:14:16.308 fused_ordering(245) 00:14:16.308 fused_ordering(246) 00:14:16.308 fused_ordering(247) 00:14:16.308 fused_ordering(248) 00:14:16.308 fused_ordering(249) 00:14:16.308 fused_ordering(250) 00:14:16.308 fused_ordering(251) 00:14:16.308 fused_ordering(252) 00:14:16.308 fused_ordering(253) 00:14:16.308 fused_ordering(254) 00:14:16.308 fused_ordering(255) 00:14:16.308 fused_ordering(256) 00:14:16.308 fused_ordering(257) 00:14:16.308 fused_ordering(258) 00:14:16.308 fused_ordering(259) 00:14:16.308 fused_ordering(260) 00:14:16.308 fused_ordering(261) 00:14:16.308 
fused_ordering(262) 00:14:16.308 fused_ordering(263) 00:14:16.308 fused_ordering(264) 00:14:16.308 fused_ordering(265) 00:14:16.308 fused_ordering(266) 00:14:16.308 fused_ordering(267) 00:14:16.308 fused_ordering(268) 00:14:16.308 fused_ordering(269) 00:14:16.308 fused_ordering(270) 00:14:16.308 fused_ordering(271) 00:14:16.308 fused_ordering(272) 00:14:16.308 fused_ordering(273) 00:14:16.308 fused_ordering(274) 00:14:16.308 fused_ordering(275) 00:14:16.308 fused_ordering(276) 00:14:16.308 fused_ordering(277) 00:14:16.308 fused_ordering(278) 00:14:16.308 fused_ordering(279) 00:14:16.308 fused_ordering(280) 00:14:16.308 fused_ordering(281) 00:14:16.308 fused_ordering(282) 00:14:16.308 fused_ordering(283) 00:14:16.308 fused_ordering(284) 00:14:16.308 fused_ordering(285) 00:14:16.308 fused_ordering(286) 00:14:16.308 fused_ordering(287) 00:14:16.308 fused_ordering(288) 00:14:16.308 fused_ordering(289) 00:14:16.308 fused_ordering(290) 00:14:16.308 fused_ordering(291) 00:14:16.308 fused_ordering(292) 00:14:16.308 fused_ordering(293) 00:14:16.308 fused_ordering(294) 00:14:16.308 fused_ordering(295) 00:14:16.308 fused_ordering(296) 00:14:16.308 fused_ordering(297) 00:14:16.308 fused_ordering(298) 00:14:16.308 fused_ordering(299) 00:14:16.308 fused_ordering(300) 00:14:16.308 fused_ordering(301) 00:14:16.308 fused_ordering(302) 00:14:16.308 fused_ordering(303) 00:14:16.308 fused_ordering(304) 00:14:16.308 fused_ordering(305) 00:14:16.308 fused_ordering(306) 00:14:16.308 fused_ordering(307) 00:14:16.308 fused_ordering(308) 00:14:16.308 fused_ordering(309) 00:14:16.308 fused_ordering(310) 00:14:16.308 fused_ordering(311) 00:14:16.308 fused_ordering(312) 00:14:16.308 fused_ordering(313) 00:14:16.308 fused_ordering(314) 00:14:16.308 fused_ordering(315) 00:14:16.308 fused_ordering(316) 00:14:16.308 fused_ordering(317) 00:14:16.308 fused_ordering(318) 00:14:16.308 fused_ordering(319) 00:14:16.308 fused_ordering(320) 00:14:16.308 fused_ordering(321) 00:14:16.308 fused_ordering(322) 00:14:16.308 fused_ordering(323) 00:14:16.308 fused_ordering(324) 00:14:16.308 fused_ordering(325) 00:14:16.308 fused_ordering(326) 00:14:16.309 fused_ordering(327) 00:14:16.309 fused_ordering(328) 00:14:16.309 fused_ordering(329) 00:14:16.309 fused_ordering(330) 00:14:16.309 fused_ordering(331) 00:14:16.309 fused_ordering(332) 00:14:16.309 fused_ordering(333) 00:14:16.309 fused_ordering(334) 00:14:16.309 fused_ordering(335) 00:14:16.309 fused_ordering(336) 00:14:16.309 fused_ordering(337) 00:14:16.309 fused_ordering(338) 00:14:16.309 fused_ordering(339) 00:14:16.309 fused_ordering(340) 00:14:16.309 fused_ordering(341) 00:14:16.309 fused_ordering(342) 00:14:16.309 fused_ordering(343) 00:14:16.309 fused_ordering(344) 00:14:16.309 fused_ordering(345) 00:14:16.309 fused_ordering(346) 00:14:16.309 fused_ordering(347) 00:14:16.309 fused_ordering(348) 00:14:16.309 fused_ordering(349) 00:14:16.309 fused_ordering(350) 00:14:16.309 fused_ordering(351) 00:14:16.309 fused_ordering(352) 00:14:16.309 fused_ordering(353) 00:14:16.309 fused_ordering(354) 00:14:16.309 fused_ordering(355) 00:14:16.309 fused_ordering(356) 00:14:16.309 fused_ordering(357) 00:14:16.309 fused_ordering(358) 00:14:16.309 fused_ordering(359) 00:14:16.309 fused_ordering(360) 00:14:16.309 fused_ordering(361) 00:14:16.309 fused_ordering(362) 00:14:16.309 fused_ordering(363) 00:14:16.309 fused_ordering(364) 00:14:16.309 fused_ordering(365) 00:14:16.309 fused_ordering(366) 00:14:16.309 fused_ordering(367) 00:14:16.309 fused_ordering(368) 00:14:16.309 fused_ordering(369) 
00:14:16.309 fused_ordering(370) 00:14:16.309 fused_ordering(371) 00:14:16.309 fused_ordering(372) 00:14:16.309 fused_ordering(373) 00:14:16.309 fused_ordering(374) 00:14:16.309 fused_ordering(375) 00:14:16.309 fused_ordering(376) 00:14:16.309 fused_ordering(377) 00:14:16.309 fused_ordering(378) 00:14:16.309 fused_ordering(379) 00:14:16.309 fused_ordering(380) 00:14:16.309 fused_ordering(381) 00:14:16.309 fused_ordering(382) 00:14:16.309 fused_ordering(383) 00:14:16.309 fused_ordering(384) 00:14:16.309 fused_ordering(385) 00:14:16.309 fused_ordering(386) 00:14:16.309 fused_ordering(387) 00:14:16.309 fused_ordering(388) 00:14:16.309 fused_ordering(389) 00:14:16.309 fused_ordering(390) 00:14:16.309 fused_ordering(391) 00:14:16.309 fused_ordering(392) 00:14:16.309 fused_ordering(393) 00:14:16.309 fused_ordering(394) 00:14:16.309 fused_ordering(395) 00:14:16.309 fused_ordering(396) 00:14:16.309 fused_ordering(397) 00:14:16.309 fused_ordering(398) 00:14:16.309 fused_ordering(399) 00:14:16.309 fused_ordering(400) 00:14:16.309 fused_ordering(401) 00:14:16.309 fused_ordering(402) 00:14:16.309 fused_ordering(403) 00:14:16.309 fused_ordering(404) 00:14:16.309 fused_ordering(405) 00:14:16.309 fused_ordering(406) 00:14:16.309 fused_ordering(407) 00:14:16.309 fused_ordering(408) 00:14:16.309 fused_ordering(409) 00:14:16.309 fused_ordering(410) 00:14:16.875 fused_ordering(411) 00:14:16.875 fused_ordering(412) 00:14:16.875 fused_ordering(413) 00:14:16.875 fused_ordering(414) 00:14:16.875 fused_ordering(415) 00:14:16.875 fused_ordering(416) 00:14:16.875 fused_ordering(417) 00:14:16.875 fused_ordering(418) 00:14:16.875 fused_ordering(419) 00:14:16.875 fused_ordering(420) 00:14:16.875 fused_ordering(421) 00:14:16.875 fused_ordering(422) 00:14:16.875 fused_ordering(423) 00:14:16.875 fused_ordering(424) 00:14:16.875 fused_ordering(425) 00:14:16.875 fused_ordering(426) 00:14:16.875 fused_ordering(427) 00:14:16.875 fused_ordering(428) 00:14:16.875 fused_ordering(429) 00:14:16.875 fused_ordering(430) 00:14:16.875 fused_ordering(431) 00:14:16.875 fused_ordering(432) 00:14:16.875 fused_ordering(433) 00:14:16.875 fused_ordering(434) 00:14:16.875 fused_ordering(435) 00:14:16.875 fused_ordering(436) 00:14:16.875 fused_ordering(437) 00:14:16.875 fused_ordering(438) 00:14:16.875 fused_ordering(439) 00:14:16.875 fused_ordering(440) 00:14:16.875 fused_ordering(441) 00:14:16.875 fused_ordering(442) 00:14:16.875 fused_ordering(443) 00:14:16.875 fused_ordering(444) 00:14:16.875 fused_ordering(445) 00:14:16.875 fused_ordering(446) 00:14:16.875 fused_ordering(447) 00:14:16.875 fused_ordering(448) 00:14:16.875 fused_ordering(449) 00:14:16.875 fused_ordering(450) 00:14:16.875 fused_ordering(451) 00:14:16.875 fused_ordering(452) 00:14:16.875 fused_ordering(453) 00:14:16.875 fused_ordering(454) 00:14:16.875 fused_ordering(455) 00:14:16.875 fused_ordering(456) 00:14:16.875 fused_ordering(457) 00:14:16.875 fused_ordering(458) 00:14:16.875 fused_ordering(459) 00:14:16.875 fused_ordering(460) 00:14:16.875 fused_ordering(461) 00:14:16.875 fused_ordering(462) 00:14:16.875 fused_ordering(463) 00:14:16.875 fused_ordering(464) 00:14:16.875 fused_ordering(465) 00:14:16.875 fused_ordering(466) 00:14:16.875 fused_ordering(467) 00:14:16.875 fused_ordering(468) 00:14:16.875 fused_ordering(469) 00:14:16.875 fused_ordering(470) 00:14:16.875 fused_ordering(471) 00:14:16.875 fused_ordering(472) 00:14:16.875 fused_ordering(473) 00:14:16.875 fused_ordering(474) 00:14:16.875 fused_ordering(475) 00:14:16.875 fused_ordering(476) 00:14:16.875 
fused_ordering(477) 00:14:16.875 fused_ordering(478) 00:14:16.875 fused_ordering(479) 00:14:16.875 fused_ordering(480) 00:14:16.875 fused_ordering(481) 00:14:16.875 fused_ordering(482) 00:14:16.875 fused_ordering(483) 00:14:16.875 fused_ordering(484) 00:14:16.875 fused_ordering(485) 00:14:16.875 fused_ordering(486) 00:14:16.875 fused_ordering(487) 00:14:16.875 fused_ordering(488) 00:14:16.875 fused_ordering(489) 00:14:16.875 fused_ordering(490) 00:14:16.875 fused_ordering(491) 00:14:16.875 fused_ordering(492) 00:14:16.875 fused_ordering(493) 00:14:16.875 fused_ordering(494) 00:14:16.875 fused_ordering(495) 00:14:16.875 fused_ordering(496) 00:14:16.875 fused_ordering(497) 00:14:16.875 fused_ordering(498) 00:14:16.875 fused_ordering(499) 00:14:16.875 fused_ordering(500) 00:14:16.875 fused_ordering(501) 00:14:16.875 fused_ordering(502) 00:14:16.875 fused_ordering(503) 00:14:16.875 fused_ordering(504) 00:14:16.875 fused_ordering(505) 00:14:16.875 fused_ordering(506) 00:14:16.875 fused_ordering(507) 00:14:16.875 fused_ordering(508) 00:14:16.875 fused_ordering(509) 00:14:16.875 fused_ordering(510) 00:14:16.875 fused_ordering(511) 00:14:16.875 fused_ordering(512) 00:14:16.875 fused_ordering(513) 00:14:16.875 fused_ordering(514) 00:14:16.875 fused_ordering(515) 00:14:16.875 fused_ordering(516) 00:14:16.875 fused_ordering(517) 00:14:16.875 fused_ordering(518) 00:14:16.875 fused_ordering(519) 00:14:16.875 fused_ordering(520) 00:14:16.875 fused_ordering(521) 00:14:16.875 fused_ordering(522) 00:14:16.875 fused_ordering(523) 00:14:16.875 fused_ordering(524) 00:14:16.875 fused_ordering(525) 00:14:16.875 fused_ordering(526) 00:14:16.875 fused_ordering(527) 00:14:16.875 fused_ordering(528) 00:14:16.875 fused_ordering(529) 00:14:16.875 fused_ordering(530) 00:14:16.875 fused_ordering(531) 00:14:16.875 fused_ordering(532) 00:14:16.875 fused_ordering(533) 00:14:16.875 fused_ordering(534) 00:14:16.875 fused_ordering(535) 00:14:16.875 fused_ordering(536) 00:14:16.875 fused_ordering(537) 00:14:16.875 fused_ordering(538) 00:14:16.875 fused_ordering(539) 00:14:16.875 fused_ordering(540) 00:14:16.875 fused_ordering(541) 00:14:16.875 fused_ordering(542) 00:14:16.875 fused_ordering(543) 00:14:16.875 fused_ordering(544) 00:14:16.875 fused_ordering(545) 00:14:16.875 fused_ordering(546) 00:14:16.875 fused_ordering(547) 00:14:16.875 fused_ordering(548) 00:14:16.875 fused_ordering(549) 00:14:16.875 fused_ordering(550) 00:14:16.875 fused_ordering(551) 00:14:16.875 fused_ordering(552) 00:14:16.875 fused_ordering(553) 00:14:16.875 fused_ordering(554) 00:14:16.875 fused_ordering(555) 00:14:16.875 fused_ordering(556) 00:14:16.875 fused_ordering(557) 00:14:16.875 fused_ordering(558) 00:14:16.875 fused_ordering(559) 00:14:16.875 fused_ordering(560) 00:14:16.875 fused_ordering(561) 00:14:16.875 fused_ordering(562) 00:14:16.875 fused_ordering(563) 00:14:16.875 fused_ordering(564) 00:14:16.875 fused_ordering(565) 00:14:16.875 fused_ordering(566) 00:14:16.875 fused_ordering(567) 00:14:16.875 fused_ordering(568) 00:14:16.875 fused_ordering(569) 00:14:16.875 fused_ordering(570) 00:14:16.875 fused_ordering(571) 00:14:16.875 fused_ordering(572) 00:14:16.875 fused_ordering(573) 00:14:16.875 fused_ordering(574) 00:14:16.875 fused_ordering(575) 00:14:16.875 fused_ordering(576) 00:14:16.875 fused_ordering(577) 00:14:16.875 fused_ordering(578) 00:14:16.875 fused_ordering(579) 00:14:16.875 fused_ordering(580) 00:14:16.875 fused_ordering(581) 00:14:16.875 fused_ordering(582) 00:14:16.875 fused_ordering(583) 00:14:16.875 fused_ordering(584) 
00:14:16.875 fused_ordering(585) 00:14:16.875 fused_ordering(586) 00:14:16.875 fused_ordering(587) 00:14:16.875 fused_ordering(588) 00:14:16.875 fused_ordering(589) 00:14:16.875 fused_ordering(590) 00:14:16.875 fused_ordering(591) 00:14:16.875 fused_ordering(592) 00:14:16.875 fused_ordering(593) 00:14:16.875 fused_ordering(594) 00:14:16.875 fused_ordering(595) 00:14:16.875 fused_ordering(596) 00:14:16.875 fused_ordering(597) 00:14:16.875 fused_ordering(598) 00:14:16.875 fused_ordering(599) 00:14:16.875 fused_ordering(600) 00:14:16.875 fused_ordering(601) 00:14:16.875 fused_ordering(602) 00:14:16.875 fused_ordering(603) 00:14:16.875 fused_ordering(604) 00:14:16.875 fused_ordering(605) 00:14:16.875 fused_ordering(606) 00:14:16.875 fused_ordering(607) 00:14:16.875 fused_ordering(608) 00:14:16.875 fused_ordering(609) 00:14:16.875 fused_ordering(610) 00:14:16.875 fused_ordering(611) 00:14:16.875 fused_ordering(612) 00:14:16.875 fused_ordering(613) 00:14:16.875 fused_ordering(614) 00:14:16.875 fused_ordering(615) 00:14:17.809 fused_ordering(616) 00:14:17.809 fused_ordering(617) 00:14:17.809 fused_ordering(618) 00:14:17.809 fused_ordering(619) 00:14:17.809 fused_ordering(620) 00:14:17.809 fused_ordering(621) 00:14:17.809 fused_ordering(622) 00:14:17.809 fused_ordering(623) 00:14:17.809 fused_ordering(624) 00:14:17.809 fused_ordering(625) 00:14:17.809 fused_ordering(626) 00:14:17.809 fused_ordering(627) 00:14:17.809 fused_ordering(628) 00:14:17.809 fused_ordering(629) 00:14:17.809 fused_ordering(630) 00:14:17.809 fused_ordering(631) 00:14:17.809 fused_ordering(632) 00:14:17.809 fused_ordering(633) 00:14:17.809 fused_ordering(634) 00:14:17.809 fused_ordering(635) 00:14:17.809 fused_ordering(636) 00:14:17.809 fused_ordering(637) 00:14:17.809 fused_ordering(638) 00:14:17.809 fused_ordering(639) 00:14:17.809 fused_ordering(640) 00:14:17.809 fused_ordering(641) 00:14:17.809 fused_ordering(642) 00:14:17.809 fused_ordering(643) 00:14:17.809 fused_ordering(644) 00:14:17.809 fused_ordering(645) 00:14:17.809 fused_ordering(646) 00:14:17.809 fused_ordering(647) 00:14:17.809 fused_ordering(648) 00:14:17.809 fused_ordering(649) 00:14:17.809 fused_ordering(650) 00:14:17.809 fused_ordering(651) 00:14:17.809 fused_ordering(652) 00:14:17.809 fused_ordering(653) 00:14:17.809 fused_ordering(654) 00:14:17.809 fused_ordering(655) 00:14:17.809 fused_ordering(656) 00:14:17.809 fused_ordering(657) 00:14:17.809 fused_ordering(658) 00:14:17.809 fused_ordering(659) 00:14:17.809 fused_ordering(660) 00:14:17.809 fused_ordering(661) 00:14:17.809 fused_ordering(662) 00:14:17.809 fused_ordering(663) 00:14:17.809 fused_ordering(664) 00:14:17.809 fused_ordering(665) 00:14:17.809 fused_ordering(666) 00:14:17.809 fused_ordering(667) 00:14:17.809 fused_ordering(668) 00:14:17.809 fused_ordering(669) 00:14:17.809 fused_ordering(670) 00:14:17.809 fused_ordering(671) 00:14:17.809 fused_ordering(672) 00:14:17.809 fused_ordering(673) 00:14:17.809 fused_ordering(674) 00:14:17.809 fused_ordering(675) 00:14:17.809 fused_ordering(676) 00:14:17.809 fused_ordering(677) 00:14:17.809 fused_ordering(678) 00:14:17.809 fused_ordering(679) 00:14:17.809 fused_ordering(680) 00:14:17.809 fused_ordering(681) 00:14:17.809 fused_ordering(682) 00:14:17.809 fused_ordering(683) 00:14:17.809 fused_ordering(684) 00:14:17.809 fused_ordering(685) 00:14:17.809 fused_ordering(686) 00:14:17.809 fused_ordering(687) 00:14:17.809 fused_ordering(688) 00:14:17.809 fused_ordering(689) 00:14:17.809 fused_ordering(690) 00:14:17.809 fused_ordering(691) 00:14:17.809 
fused_ordering(692) 00:14:17.809 fused_ordering(693) 00:14:17.809 fused_ordering(694) 00:14:17.809 fused_ordering(695) 00:14:17.809 fused_ordering(696) 00:14:17.809 fused_ordering(697) 00:14:17.809 fused_ordering(698) 00:14:17.809 fused_ordering(699) 00:14:17.809 fused_ordering(700) 00:14:17.809 fused_ordering(701) 00:14:17.809 fused_ordering(702) 00:14:17.809 fused_ordering(703) 00:14:17.809 fused_ordering(704) 00:14:17.809 fused_ordering(705) 00:14:17.809 fused_ordering(706) 00:14:17.809 fused_ordering(707) 00:14:17.809 fused_ordering(708) 00:14:17.809 fused_ordering(709) 00:14:17.809 fused_ordering(710) 00:14:17.809 fused_ordering(711) 00:14:17.809 fused_ordering(712) 00:14:17.809 fused_ordering(713) 00:14:17.809 fused_ordering(714) 00:14:17.809 fused_ordering(715) 00:14:17.809 fused_ordering(716) 00:14:17.809 fused_ordering(717) 00:14:17.809 fused_ordering(718) 00:14:17.809 fused_ordering(719) 00:14:17.809 fused_ordering(720) 00:14:17.809 fused_ordering(721) 00:14:17.809 fused_ordering(722) 00:14:17.809 fused_ordering(723) 00:14:17.809 fused_ordering(724) 00:14:17.809 fused_ordering(725) 00:14:17.809 fused_ordering(726) 00:14:17.809 fused_ordering(727) 00:14:17.809 fused_ordering(728) 00:14:17.809 fused_ordering(729) 00:14:17.809 fused_ordering(730) 00:14:17.809 fused_ordering(731) 00:14:17.809 fused_ordering(732) 00:14:17.809 fused_ordering(733) 00:14:17.809 fused_ordering(734) 00:14:17.809 fused_ordering(735) 00:14:17.809 fused_ordering(736) 00:14:17.809 fused_ordering(737) 00:14:17.809 fused_ordering(738) 00:14:17.809 fused_ordering(739) 00:14:17.809 fused_ordering(740) 00:14:17.809 fused_ordering(741) 00:14:17.809 fused_ordering(742) 00:14:17.809 fused_ordering(743) 00:14:17.809 fused_ordering(744) 00:14:17.809 fused_ordering(745) 00:14:17.809 fused_ordering(746) 00:14:17.809 fused_ordering(747) 00:14:17.809 fused_ordering(748) 00:14:17.809 fused_ordering(749) 00:14:17.809 fused_ordering(750) 00:14:17.809 fused_ordering(751) 00:14:17.809 fused_ordering(752) 00:14:17.809 fused_ordering(753) 00:14:17.809 fused_ordering(754) 00:14:17.809 fused_ordering(755) 00:14:17.809 fused_ordering(756) 00:14:17.809 fused_ordering(757) 00:14:17.809 fused_ordering(758) 00:14:17.809 fused_ordering(759) 00:14:17.809 fused_ordering(760) 00:14:17.809 fused_ordering(761) 00:14:17.809 fused_ordering(762) 00:14:17.809 fused_ordering(763) 00:14:17.809 fused_ordering(764) 00:14:17.809 fused_ordering(765) 00:14:17.809 fused_ordering(766) 00:14:17.809 fused_ordering(767) 00:14:17.809 fused_ordering(768) 00:14:17.809 fused_ordering(769) 00:14:17.809 fused_ordering(770) 00:14:17.809 fused_ordering(771) 00:14:17.810 fused_ordering(772) 00:14:17.810 fused_ordering(773) 00:14:17.810 fused_ordering(774) 00:14:17.810 fused_ordering(775) 00:14:17.810 fused_ordering(776) 00:14:17.810 fused_ordering(777) 00:14:17.810 fused_ordering(778) 00:14:17.810 fused_ordering(779) 00:14:17.810 fused_ordering(780) 00:14:17.810 fused_ordering(781) 00:14:17.810 fused_ordering(782) 00:14:17.810 fused_ordering(783) 00:14:17.810 fused_ordering(784) 00:14:17.810 fused_ordering(785) 00:14:17.810 fused_ordering(786) 00:14:17.810 fused_ordering(787) 00:14:17.810 fused_ordering(788) 00:14:17.810 fused_ordering(789) 00:14:17.810 fused_ordering(790) 00:14:17.810 fused_ordering(791) 00:14:17.810 fused_ordering(792) 00:14:17.810 fused_ordering(793) 00:14:17.810 fused_ordering(794) 00:14:17.810 fused_ordering(795) 00:14:17.810 fused_ordering(796) 00:14:17.810 fused_ordering(797) 00:14:17.810 fused_ordering(798) 00:14:17.810 fused_ordering(799) 
00:14:17.810 fused_ordering(800) 00:14:17.810 fused_ordering(801) 00:14:17.810 fused_ordering(802) 00:14:17.810 fused_ordering(803) 00:14:17.810 fused_ordering(804) 00:14:17.810 fused_ordering(805) 00:14:17.810 fused_ordering(806) 00:14:17.810 fused_ordering(807) 00:14:17.810 fused_ordering(808) 00:14:17.810 fused_ordering(809) 00:14:17.810 fused_ordering(810) 00:14:17.810 fused_ordering(811) 00:14:17.810 fused_ordering(812) 00:14:17.810 fused_ordering(813) 00:14:17.810 fused_ordering(814) 00:14:17.810 fused_ordering(815) 00:14:17.810 fused_ordering(816) 00:14:17.810 fused_ordering(817) 00:14:17.810 fused_ordering(818) 00:14:17.810 fused_ordering(819) 00:14:17.810 fused_ordering(820) 00:14:18.742 fused_ordering(821) 00:14:18.742 fused_ordering(822) 00:14:18.742 fused_ordering(823) 00:14:18.742 fused_ordering(824) 00:14:18.742 fused_ordering(825) 00:14:18.742 fused_ordering(826) 00:14:18.742 fused_ordering(827) 00:14:18.742 fused_ordering(828) 00:14:18.742 fused_ordering(829) 00:14:18.742 fused_ordering(830) 00:14:18.742 fused_ordering(831) 00:14:18.742 fused_ordering(832) 00:14:18.742 fused_ordering(833) 00:14:18.742 fused_ordering(834) 00:14:18.742 fused_ordering(835) 00:14:18.742 fused_ordering(836) 00:14:18.742 fused_ordering(837) 00:14:18.742 fused_ordering(838) 00:14:18.742 fused_ordering(839) 00:14:18.742 fused_ordering(840) 00:14:18.742 fused_ordering(841) 00:14:18.742 fused_ordering(842) 00:14:18.742 fused_ordering(843) 00:14:18.742 fused_ordering(844) 00:14:18.743 fused_ordering(845) 00:14:18.743 fused_ordering(846) 00:14:18.743 fused_ordering(847) 00:14:18.743 fused_ordering(848) 00:14:18.743 fused_ordering(849) 00:14:18.743 fused_ordering(850) 00:14:18.743 fused_ordering(851) 00:14:18.743 fused_ordering(852) 00:14:18.743 fused_ordering(853) 00:14:18.743 fused_ordering(854) 00:14:18.743 fused_ordering(855) 00:14:18.743 fused_ordering(856) 00:14:18.743 fused_ordering(857) 00:14:18.743 fused_ordering(858) 00:14:18.743 fused_ordering(859) 00:14:18.743 fused_ordering(860) 00:14:18.743 fused_ordering(861) 00:14:18.743 fused_ordering(862) 00:14:18.743 fused_ordering(863) 00:14:18.743 fused_ordering(864) 00:14:18.743 fused_ordering(865) 00:14:18.743 fused_ordering(866) 00:14:18.743 fused_ordering(867) 00:14:18.743 fused_ordering(868) 00:14:18.743 fused_ordering(869) 00:14:18.743 fused_ordering(870) 00:14:18.743 fused_ordering(871) 00:14:18.743 fused_ordering(872) 00:14:18.743 fused_ordering(873) 00:14:18.743 fused_ordering(874) 00:14:18.743 fused_ordering(875) 00:14:18.743 fused_ordering(876) 00:14:18.743 fused_ordering(877) 00:14:18.743 fused_ordering(878) 00:14:18.743 fused_ordering(879) 00:14:18.743 fused_ordering(880) 00:14:18.743 fused_ordering(881) 00:14:18.743 fused_ordering(882) 00:14:18.743 fused_ordering(883) 00:14:18.743 fused_ordering(884) 00:14:18.743 fused_ordering(885) 00:14:18.743 fused_ordering(886) 00:14:18.743 fused_ordering(887) 00:14:18.743 fused_ordering(888) 00:14:18.743 fused_ordering(889) 00:14:18.743 fused_ordering(890) 00:14:18.743 fused_ordering(891) 00:14:18.743 fused_ordering(892) 00:14:18.743 fused_ordering(893) 00:14:18.743 fused_ordering(894) 00:14:18.743 fused_ordering(895) 00:14:18.743 fused_ordering(896) 00:14:18.743 fused_ordering(897) 00:14:18.743 fused_ordering(898) 00:14:18.743 fused_ordering(899) 00:14:18.743 fused_ordering(900) 00:14:18.743 fused_ordering(901) 00:14:18.743 fused_ordering(902) 00:14:18.743 fused_ordering(903) 00:14:18.743 fused_ordering(904) 00:14:18.743 fused_ordering(905) 00:14:18.743 fused_ordering(906) 00:14:18.743 
fused_ordering(907) 00:14:18.743 fused_ordering(908) 00:14:18.743 fused_ordering(909) 00:14:18.743 fused_ordering(910) 00:14:18.743 fused_ordering(911) 00:14:18.743 fused_ordering(912) 00:14:18.743 fused_ordering(913) 00:14:18.743 fused_ordering(914) 00:14:18.743 fused_ordering(915) 00:14:18.743 fused_ordering(916) 00:14:18.743 fused_ordering(917) 00:14:18.743 fused_ordering(918) 00:14:18.743 fused_ordering(919) 00:14:18.743 fused_ordering(920) 00:14:18.743 fused_ordering(921) 00:14:18.743 fused_ordering(922) 00:14:18.743 fused_ordering(923) 00:14:18.743 fused_ordering(924) 00:14:18.743 fused_ordering(925) 00:14:18.743 fused_ordering(926) 00:14:18.743 fused_ordering(927) 00:14:18.743 fused_ordering(928) 00:14:18.743 fused_ordering(929) 00:14:18.743 fused_ordering(930) 00:14:18.743 fused_ordering(931) 00:14:18.743 fused_ordering(932) 00:14:18.743 fused_ordering(933) 00:14:18.743 fused_ordering(934) 00:14:18.743 fused_ordering(935) 00:14:18.743 fused_ordering(936) 00:14:18.743 fused_ordering(937) 00:14:18.743 fused_ordering(938) 00:14:18.743 fused_ordering(939) 00:14:18.743 fused_ordering(940) 00:14:18.743 fused_ordering(941) 00:14:18.743 fused_ordering(942) 00:14:18.743 fused_ordering(943) 00:14:18.743 fused_ordering(944) 00:14:18.743 fused_ordering(945) 00:14:18.743 fused_ordering(946) 00:14:18.743 fused_ordering(947) 00:14:18.743 fused_ordering(948) 00:14:18.743 fused_ordering(949) 00:14:18.743 fused_ordering(950) 00:14:18.743 fused_ordering(951) 00:14:18.743 fused_ordering(952) 00:14:18.743 fused_ordering(953) 00:14:18.743 fused_ordering(954) 00:14:18.743 fused_ordering(955) 00:14:18.743 fused_ordering(956) 00:14:18.743 fused_ordering(957) 00:14:18.743 fused_ordering(958) 00:14:18.743 fused_ordering(959) 00:14:18.743 fused_ordering(960) 00:14:18.743 fused_ordering(961) 00:14:18.743 fused_ordering(962) 00:14:18.743 fused_ordering(963) 00:14:18.743 fused_ordering(964) 00:14:18.743 fused_ordering(965) 00:14:18.743 fused_ordering(966) 00:14:18.743 fused_ordering(967) 00:14:18.743 fused_ordering(968) 00:14:18.743 fused_ordering(969) 00:14:18.743 fused_ordering(970) 00:14:18.743 fused_ordering(971) 00:14:18.743 fused_ordering(972) 00:14:18.743 fused_ordering(973) 00:14:18.743 fused_ordering(974) 00:14:18.743 fused_ordering(975) 00:14:18.743 fused_ordering(976) 00:14:18.743 fused_ordering(977) 00:14:18.743 fused_ordering(978) 00:14:18.743 fused_ordering(979) 00:14:18.743 fused_ordering(980) 00:14:18.743 fused_ordering(981) 00:14:18.743 fused_ordering(982) 00:14:18.743 fused_ordering(983) 00:14:18.743 fused_ordering(984) 00:14:18.743 fused_ordering(985) 00:14:18.743 fused_ordering(986) 00:14:18.743 fused_ordering(987) 00:14:18.743 fused_ordering(988) 00:14:18.743 fused_ordering(989) 00:14:18.743 fused_ordering(990) 00:14:18.743 fused_ordering(991) 00:14:18.743 fused_ordering(992) 00:14:18.743 fused_ordering(993) 00:14:18.743 fused_ordering(994) 00:14:18.743 fused_ordering(995) 00:14:18.743 fused_ordering(996) 00:14:18.743 fused_ordering(997) 00:14:18.743 fused_ordering(998) 00:14:18.743 fused_ordering(999) 00:14:18.743 fused_ordering(1000) 00:14:18.743 fused_ordering(1001) 00:14:18.743 fused_ordering(1002) 00:14:18.743 fused_ordering(1003) 00:14:18.743 fused_ordering(1004) 00:14:18.743 fused_ordering(1005) 00:14:18.743 fused_ordering(1006) 00:14:18.743 fused_ordering(1007) 00:14:18.743 fused_ordering(1008) 00:14:18.743 fused_ordering(1009) 00:14:18.743 fused_ordering(1010) 00:14:18.743 fused_ordering(1011) 00:14:18.743 fused_ordering(1012) 00:14:18.743 fused_ordering(1013) 00:14:18.743 
fused_ordering(1014) 00:14:18.743 fused_ordering(1015) 00:14:18.743 fused_ordering(1016) 00:14:18.743 fused_ordering(1017) 00:14:18.743 fused_ordering(1018) 00:14:18.743 fused_ordering(1019) 00:14:18.743 fused_ordering(1020) 00:14:18.743 fused_ordering(1021) 00:14:18.743 fused_ordering(1022) 00:14:18.743 fused_ordering(1023) 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.743 rmmod nvme_tcp 00:14:18.743 rmmod nvme_fabrics 00:14:18.743 rmmod nvme_keyring 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3146246 ']' 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3146246 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 3146246 ']' 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 3146246 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3146246 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3146246' 00:14:18.743 killing process with pid 3146246 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 3146246 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 3146246 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
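The bring-up this fused-ordering run exercised is driven over SPDK's JSON-RPC socket; the rpc_cmd calls in the trace are the suite's wrapper around scripts/rpc.py. A rough standalone equivalent, run from the SPDK repo root, with every argument taken from the trace above (the cnode1 subsystem name, the NULL1 bdev, and the addresses are this run's choices):

# Start the target app on core mask 0x2 inside the namespace created earlier.
# The RPC socket /var/tmp/spdk.sock is a path-based unix socket, so it stays
# reachable from the root namespace.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

# '-t tcp -o -u 8192' exactly as traced; -o is the extra flag common.sh
# appends for TCP, -u caps in-capsule data at 8192 bytes.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MiB, 512 B blocks: the "size: 1GB" namespace above
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary then connects using the trid string shown in the trace and prints one fused_ordering(N) line per iteration, indices 0 through 1023 in this run.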
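Teardown, traced around this point, unwinds the same pieces. A sketch with this run's pid, again read off the trace; the _remove_spdk_ns helper is not expanded in the log, so the netns deletion line is an assumption about its effect:

sync
modprobe -v -r nvme-tcp          # the trace shows this pulling out nvme_tcp, nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
kill 3146246 && wait 3146246     # killprocess: stop the nvmf_tgt started above
ip netns delete cvl_0_0_ns_spdk  # assumed: what _remove_spdk_ns boils down to
ip -4 addr flush cvl_0_1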
00:14:18.743 20:02:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.278 20:02:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:21.278 00:14:21.278 real 0m8.320s 00:14:21.278 user 0m5.396s 00:14:21.278 sys 0m4.279s 00:14:21.278 20:02:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:21.278 20:02:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:21.278 ************************************ 00:14:21.278 END TEST nvmf_fused_ordering 00:14:21.278 ************************************ 00:14:21.278 20:02:08 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:21.278 20:02:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:21.278 20:02:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:21.278 20:02:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:21.278 ************************************ 00:14:21.278 START TEST nvmf_delete_subsystem 00:14:21.278 ************************************ 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:21.278 * Looking for test storage... 00:14:21.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.278 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:21.279 20:02:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.181 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.181 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:23.181 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:23.181 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:23.182 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:23.182 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:23.182 20:02:10 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:23.182 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:23.182 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:23.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:14:23.182 00:14:23.182 --- 10.0.0.2 ping statistics --- 00:14:23.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.182 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:23.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:23.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:14:23.182 00:14:23.182 --- 10.0.0.1 ping statistics --- 00:14:23.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.182 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3148598 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3148598 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 3148598 ']' 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:23.182 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.182 [2024-07-13 20:02:10.621958] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:23.183 [2024-07-13 20:02:10.622030] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.183 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.183 [2024-07-13 20:02:10.686509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:23.183 [2024-07-13 20:02:10.770027] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:23.183 [2024-07-13 20:02:10.770080] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.183 [2024-07-13 20:02:10.770108] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.183 [2024-07-13 20:02:10.770120] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.183 [2024-07-13 20:02:10.770130] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.183 [2024-07-13 20:02:10.770265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.183 [2024-07-13 20:02:10.770271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.441 [2024-07-13 20:02:10.906416] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.441 [2024-07-13 20:02:10.922670] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.441 NULL1 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.441 Delay0 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3148620 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:23.441 20:02:10 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:23.441 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.441 [2024-07-13 20:02:10.997338] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
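For reference, the target-side setup that delete_subsystem.sh traces above (the rpc_cmd calls at @15 through @26) condenses to the following rpc.py sequence. This is a paraphrase of the trace, not extra steps: the long Jenkins workspace path is abbreviated to $SPDK for readability, and the default /var/tmp/spdk.sock RPC socket is assumed.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $SPDK/scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MiB null bdev, 512 B blocks
    $SPDK/scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000             # ~1 s artificial latency per op
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The Delay0 bdev is the point of the test: with roughly a second of added latency per operation, spdk_nvme_perf is guaranteed to have I/O still in flight when nvmf_delete_subsystem is issued mid-run, which is what produces the "Read completed with error" stream below.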
00:14:25.343 20:02:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.343 20:02:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.343 20:02:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 starting I/O failed: -6 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 starting I/O failed: -6 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 starting I/O failed: -6 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 starting I/O failed: -6 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 starting I/O failed: -6 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 starting I/O failed: -6 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 starting I/O failed: -6 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 starting I/O failed: -6 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 starting I/O failed: -6 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 starting I/O failed: -6 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 [2024-07-13 20:02:13.086353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd59180 is same with the state(5) to be set 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 
00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Write completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.600 Read completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 starting I/O failed: -6 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 starting I/O failed: -6 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 starting I/O failed: -6 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 starting I/O failed: -6 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, 
sc=8) 00:14:25.601 starting I/O failed: -6 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 starting I/O failed: -6 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 starting I/O failed: -6 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 starting I/O failed: -6 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 starting I/O failed: -6 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 [2024-07-13 20:02:13.088273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f58a4000c00 is same with the state(5) to be set 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, 
sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:25.601 Write completed with error (sct=0, sc=8) 00:14:25.601 Read completed with error (sct=0, sc=8) 00:14:26.533 [2024-07-13 20:02:14.058650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd5c8b0 is same with the state(5) to be set 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 [2024-07-13 20:02:14.089048] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f58a400bfe0 is same with the state(5) to be set 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 [2024-07-13 20:02:14.089222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f58a400c600 is same with the state(5) to be set 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 [2024-07-13 20:02:14.089756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd59360 is same with the state(5) 
to be set 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Read completed with error (sct=0, sc=8) 00:14:26.533 Write completed with error (sct=0, sc=8) 00:14:26.533 [2024-07-13 20:02:14.090462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd59aa0 is same with the state(5) to be set 00:14:26.533 Initializing NVMe Controllers 00:14:26.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:26.533 Controller IO queue size 128, less than required. 00:14:26.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:26.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:26.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:26.534 Initialization complete. Launching workers. 
00:14:26.534 ======================================================== 00:14:26.534 Latency(us) 00:14:26.534 Device Information : IOPS MiB/s Average min max 00:14:26.534 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.93 0.08 908365.20 411.49 1010602.79 00:14:26.534 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.51 0.07 945896.23 412.68 1047120.84 00:14:26.534 ======================================================== 00:14:26.534 Total : 314.44 0.15 926330.29 411.49 1047120.84 00:14:26.534 00:14:26.534 [2024-07-13 20:02:14.091310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5c8b0 (9): Bad file descriptor 00:14:26.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:26.534 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.534 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:14:26.534 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3148620 00:14:26.534 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3148620 00:14:27.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3148620) - No such process 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3148620 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3148620 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3148620 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.099 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:27.099 [2024-07-13 20:02:14.615099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.100 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.100 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.100 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.100 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:27.100 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.100 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3149138 00:14:27.100 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:27.100 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:27.100 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3149138 00:14:27.100 20:02:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:27.100 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.100 [2024-07-13 20:02:14.668627] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
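The repeated sleep/kill iterations that follow are the harness polling the I/O generator: after deleting the subsystem out from under spdk_nvme_perf, it waits for the process to exit on its own, bounded by a retry budget. Paraphrasing delete_subsystem.sh@56-@60 as reconstructed from the trace order (57, 58, 60 per iteration):

    perf_pid=3149138                        # pid of the spdk_nvme_perf started above
    delay=0
    while kill -0 "$perf_pid"; do           # signal 0 only probes for liveness
        sleep 0.5
        (( delay++ > 20 )) && exit 1        # ~10 s budget for perf to notice and exit
    done

When the perf process finally dies, kill -0 fails with "No such process", the loop exits, and the script confirms via NOT wait that the pid is really gone before tearing the target down.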
00:14:27.665 20:02:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:27.665 20:02:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3149138 00:14:27.665 20:02:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:28.230 20:02:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:28.230 20:02:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3149138 00:14:28.230 20:02:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:28.488 20:02:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:28.488 20:02:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3149138 00:14:28.488 20:02:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:29.115 20:02:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:29.115 20:02:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3149138 00:14:29.115 20:02:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:29.680 20:02:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:29.680 20:02:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3149138 00:14:29.680 20:02:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:30.245 20:02:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:30.246 20:02:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3149138 00:14:30.246 20:02:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:30.246 Initializing NVMe Controllers 00:14:30.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:30.246 Controller IO queue size 128, less than required. 00:14:30.246 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:30.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:30.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:30.246 Initialization complete. Launching workers. 
00:14:30.246 ======================================================== 00:14:30.246 Latency(us) 00:14:30.246 Device Information : IOPS MiB/s Average min max 00:14:30.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004606.08 1000245.39 1042444.54 00:14:30.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005826.83 1000283.71 1043527.72 00:14:30.246 ======================================================== 00:14:30.246 Total : 256.00 0.12 1005216.45 1000245.39 1043527.72 00:14:30.246 00:14:30.502 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:30.502 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3149138 00:14:30.502 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3149138) - No such process 00:14:30.502 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3149138 00:14:30.502 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:30.502 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:30.502 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.502 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:30.502 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.502 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:30.502 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.502 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.502 rmmod nvme_tcp 00:14:30.760 rmmod nvme_fabrics 00:14:30.760 rmmod nvme_keyring 00:14:30.760 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.760 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:30.760 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:30.760 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3148598 ']' 00:14:30.760 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3148598 00:14:30.760 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 3148598 ']' 00:14:30.760 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 3148598 00:14:30.760 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:14:30.760 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:30.760 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3148598 00:14:30.760 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:30.760 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:30.760 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3148598' 00:14:30.760 killing process with pid 3148598 00:14:30.760 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 3148598 00:14:30.760 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 
3148598 00:14:31.017 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.017 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:31.017 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:31.017 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.017 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.017 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.017 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.017 20:02:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.920 20:02:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:32.920 00:14:32.920 real 0m12.062s 00:14:32.920 user 0m27.464s 00:14:32.920 sys 0m2.876s 00:14:32.920 20:02:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:32.920 20:02:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:32.920 ************************************ 00:14:32.920 END TEST nvmf_delete_subsystem 00:14:32.920 ************************************ 00:14:32.920 20:02:20 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:32.920 20:02:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:32.920 20:02:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:32.920 20:02:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:32.920 ************************************ 00:14:32.920 START TEST nvmf_ns_masking 00:14:32.920 ************************************ 00:14:32.920 20:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:33.178 * Looking for test storage... 
00:14:33.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=f4dbbff9-bf88-4bfb-8394-8b12b2e0af11 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:33.178 20:02:20 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:33.178 20:02:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:35.077 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:35.077 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:35.077 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
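Device discovery in this fixture works purely from sysfs: gather_supported_nvmf_pci_devs matches known Intel (e810/x722) and Mellanox PCI device IDs, then resolves each PCI function to its kernel netdev via the device's net/ directory. An equivalent manual lookup for the two ICE ports found here would be something like this sketch (the 0000:0a:00.x addresses are the ones reported above):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        ls /sys/bus/pci/devices/$pci/net/    # prints cvl_0_0 / cvl_0_1 on this rig
    done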
00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:35.077 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.077 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:35.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:14:35.078 00:14:35.078 --- 10.0.0.2 ping statistics --- 00:14:35.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.078 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:14:35.078 00:14:35.078 --- 10.0.0.1 ping statistics --- 00:14:35.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.078 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3151479 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3151479 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 3151479 ']' 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:35.078 20:02:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:35.078 [2024-07-13 20:02:22.727759] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
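The block above is the whole test-bed bring-up: one ICE port is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator, an iptables rule admits TCP port 4420, and nvmf_tgt is launched inside that namespace. Condensed from the commands logged above into a sketch (not a supported setup script):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The two pings confirm both directions work before the target is provisioned; the DPDK EAL initialization messages that follow belong to the freshly started nvmf_tgt (pid 3151479).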
00:14:35.078 [2024-07-13 20:02:22.727841] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.336 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.336 [2024-07-13 20:02:22.797421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.336 [2024-07-13 20:02:22.889532] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.336 [2024-07-13 20:02:22.889596] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.336 [2024-07-13 20:02:22.889613] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.336 [2024-07-13 20:02:22.889626] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.336 [2024-07-13 20:02:22.889638] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.336 [2024-07-13 20:02:22.889717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.336 [2024-07-13 20:02:22.889786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.336 [2024-07-13 20:02:22.889814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.336 [2024-07-13 20:02:22.889816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.592 20:02:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:35.592 20:02:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:14:35.592 20:02:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:35.592 20:02:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:35.592 20:02:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:35.592 20:02:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.592 20:02:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:35.848 [2024-07-13 20:02:23.263412] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.848 20:02:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:35.848 20:02:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:35.848 20:02:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:36.105 Malloc1 00:14:36.105 20:02:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:36.363 Malloc2 00:14:36.363 20:02:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:36.620 20:02:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:36.877 20:02:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.877 [2024-07-13 20:02:24.532635] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.133 20:02:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:37.133 20:02:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f4dbbff9-bf88-4bfb-8394-8b12b2e0af11 -a 10.0.0.2 -s 4420 -i 4 00:14:37.133 20:02:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:37.133 20:02:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:37.133 20:02:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:37.133 20:02:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:37.133 20:02:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:39.029 20:02:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:39.029 20:02:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:39.029 20:02:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:39.287 20:02:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:39.287 20:02:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:39.287 20:02:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:39.287 20:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:39.287 20:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:39.287 20:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:39.287 20:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:39.287 20:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:39.287 20:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:39.287 20:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:39.287 [ 0]:0x1 00:14:39.287 20:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:39.287 20:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:39.287 20:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=35cd921d24d54a8eb6dbc1f442c6505a 00:14:39.287 20:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 35cd921d24d54a8eb6dbc1f442c6505a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:39.287 20:02:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
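The "[ 0]:0x1" line printed next is the output of that grep. The ns_is_visible helper the test keeps invoking is, judging from the ns_masking.sh@39-41 xtrace, roughly the following; treat it as a sketch of the logic, not the verbatim source:

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"              # is the nsid listed at all?
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]  # masked namespaces identify as all zeros
    }

Callers wrap it in NOT when the namespace is supposed to be hidden from the connected host, so an all-zero NGUID (and a non-zero exit status, es=1) is the passing result in those cases.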
00:14:39.544 [ 0]:0x1 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=35cd921d24d54a8eb6dbc1f442c6505a 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 35cd921d24d54a8eb6dbc1f442c6505a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:39.544 [ 1]:0x2 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=073f525825ae4748a09974c7090ba95b 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 073f525825ae4748a09974c7090ba95b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:39.544 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.801 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.057 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:40.331 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:40.331 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f4dbbff9-bf88-4bfb-8394-8b12b2e0af11 -a 10.0.0.2 -s 4420 -i 4 00:14:40.331 20:02:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:40.331 20:02:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:40.331 20:02:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:40.332 20:02:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:14:40.332 20:02:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:14:40.332 20:02:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:42.227 20:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:42.227 20:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:42.227 20:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:42.484 20:02:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:42.484 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:42.484 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.484 20:02:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:42.484 20:02:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:42.484 20:02:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:42.484 20:02:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:42.484 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:42.484 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:42.484 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:42.484 [ 0]:0x2 00:14:42.484 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:42.484 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:42.484 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=073f525825ae4748a09974c7090ba95b 00:14:42.484 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 073f525825ae4748a09974c7090ba95b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.484 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:14:43.050 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:43.050 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:43.050 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:43.050 [ 0]:0x1 00:14:43.050 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:43.050 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:43.050 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=35cd921d24d54a8eb6dbc1f442c6505a 00:14:43.050 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 35cd921d24d54a8eb6dbc1f442c6505a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:43.050 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:43.050 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:43.050 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:43.050 [ 1]:0x2 00:14:43.050 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:43.050 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:43.050 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=073f525825ae4748a09974c7090ba95b 00:14:43.050 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 073f525825ae4748a09974c7090ba95b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:43.050 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:43.308 
20:02:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:43.308 [ 0]:0x2 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=073f525825ae4748a09974c7090ba95b 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 073f525825ae4748a09974c7090ba95b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:43.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.308 20:02:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:43.565 20:02:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:43.565 20:02:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f4dbbff9-bf88-4bfb-8394-8b12b2e0af11 -a 10.0.0.2 -s 4420 -i 4 00:14:43.822 20:02:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:43.822 20:02:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:14:43.822 20:02:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.822 20:02:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:43.822 20:02:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:43.822 20:02:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:14:45.733 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:45.733 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:45.733 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.733 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:45.733 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.733 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:14:45.733 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:45.733 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:45.733 20:02:33 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:45.733 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:45.733 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:45.733 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:45.733 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:45.733 [ 0]:0x1 00:14:45.733 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:45.733 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:45.990 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=35cd921d24d54a8eb6dbc1f442c6505a 00:14:45.990 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 35cd921d24d54a8eb6dbc1f442c6505a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.990 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:45.990 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:45.990 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:45.990 [ 1]:0x2 00:14:45.991 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:45.991 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:45.991 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=073f525825ae4748a09974c7090ba95b 00:14:45.991 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 073f525825ae4748a09974c7090ba95b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:45.991 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:46.248 [ 0]:0x2 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=073f525825ae4748a09974c7090ba95b 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 073f525825ae4748a09974c7090ba95b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:46.248 20:02:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:46.505 [2024-07-13 20:02:34.075577] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:46.505 request: 00:14:46.505 { 00:14:46.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.505 "nsid": 2, 00:14:46.505 "host": "nqn.2016-06.io.spdk:host1", 00:14:46.505 "method": 
"nvmf_ns_remove_host", 00:14:46.505 "req_id": 1 00:14:46.505 } 00:14:46.505 Got JSON-RPC error response 00:14:46.505 response: 00:14:46.505 { 00:14:46.505 "code": -32602, 00:14:46.505 "message": "Invalid parameters" 00:14:46.505 } 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:46.505 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:46.762 [ 0]:0x2 00:14:46.762 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:46.762 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:46.762 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=073f525825ae4748a09974c7090ba95b 00:14:46.762 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 073f525825ae4748a09974c7090ba95b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.762 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:46.762 20:02:34 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:46.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.762 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:47.020 rmmod nvme_tcp 00:14:47.020 rmmod nvme_fabrics 00:14:47.020 rmmod nvme_keyring 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3151479 ']' 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3151479 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 3151479 ']' 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 3151479 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:47.020 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3151479 00:14:47.278 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:47.278 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:47.278 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3151479' 00:14:47.278 killing process with pid 3151479 00:14:47.278 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 3151479 00:14:47.278 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 3151479 00:14:47.537 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:47.537 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:47.537 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:47.537 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:47.537 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:47.537 20:02:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.537 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.537 20:02:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.440 
20:02:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:49.440 00:14:49.440 real 0m16.466s 00:14:49.440 user 0m51.493s 00:14:49.440 sys 0m3.784s 00:14:49.440 20:02:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:49.440 20:02:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:49.440 ************************************ 00:14:49.440 END TEST nvmf_ns_masking 00:14:49.440 ************************************ 00:14:49.440 20:02:37 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:49.440 20:02:37 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:49.440 20:02:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:49.440 20:02:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:49.440 20:02:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:49.440 ************************************ 00:14:49.440 START TEST nvmf_nvme_cli 00:14:49.440 ************************************ 00:14:49.440 20:02:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:49.699 * Looking for test storage... 00:14:49.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:49.699 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.700 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:49.700 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:49.700 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:49.700 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.700 20:02:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.700 20:02:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.700 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:49.700 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:49.700 20:02:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:49.700 20:02:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.602 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.602 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:51.602 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:51.602 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:51.603 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:51.603 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.603 20:02:39 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:51.603 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:51.603 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.603 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:51.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:51.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:14:51.863 00:14:51.863 --- 10.0.0.2 ping statistics --- 00:14:51.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.863 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:51.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:51.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:14:51.863 00:14:51.863 --- 10.0.0.1 ping statistics --- 00:14:51.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.863 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3154916 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3154916 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 3154916 ']' 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
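The network plumbing traced above isolates the target in its own namespace; a minimal sketch of the equivalent manual sequence, using the interface names and addresses from this run (cvl_0_0/cvl_0_1 are the two E810 ports, 4420 is NVMF_PORT):

    ip netns add cvl_0_0_ns_spdk                                  # target side gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, host netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                            # host -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> host sanity check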
00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:51.863 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:51.863 [2024-07-13 20:02:39.401983] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:51.863 [2024-07-13 20:02:39.402058] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.863 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.863 [2024-07-13 20:02:39.465622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:52.122 [2024-07-13 20:02:39.553908] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.122 [2024-07-13 20:02:39.553974] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.122 [2024-07-13 20:02:39.553988] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.122 [2024-07-13 20:02:39.554000] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.122 [2024-07-13 20:02:39.554010] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.122 [2024-07-13 20:02:39.554059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.122 [2024-07-13 20:02:39.554118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.122 [2024-07-13 20:02:39.554185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:52.122 [2024-07-13 20:02:39.554187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.122 [2024-07-13 20:02:39.697420] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.122 Malloc0 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.122 Malloc1 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.122 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.123 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.123 20:02:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:52.123 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.123 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.123 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.123 20:02:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:52.123 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.123 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.123 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.123 20:02:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.123 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.123 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.123 [2024-07-13 20:02:39.780233] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.380 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.380 20:02:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:52.380 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.380 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:52.380 20:02:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.380 20:02:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:52.380 00:14:52.380 Discovery Log Number of Records 2, Generation counter 2 00:14:52.380 =====Discovery Log Entry 0====== 00:14:52.380 trtype: tcp 00:14:52.380 adrfam: ipv4 00:14:52.380 subtype: current discovery subsystem 00:14:52.381 treq: not required 00:14:52.381 portid: 0 00:14:52.381 trsvcid: 4420 00:14:52.381 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:52.381 traddr: 10.0.0.2 00:14:52.381 eflags: explicit discovery connections, duplicate discovery information 00:14:52.381 sectype: none 00:14:52.381 =====Discovery Log Entry 1====== 00:14:52.381 trtype: tcp 00:14:52.381 adrfam: ipv4 00:14:52.381 subtype: nvme subsystem 00:14:52.381 treq: not required 00:14:52.381 portid: 0 00:14:52.381 trsvcid: 
4420 00:14:52.381 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:52.381 traddr: 10.0.0.2 00:14:52.381 eflags: none 00:14:52.381 sectype: none 00:14:52.381 20:02:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:52.381 20:02:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:52.381 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:52.381 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:52.381 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:52.381 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:52.381 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:52.381 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:52.381 20:02:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:52.381 20:02:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:52.381 20:02:39 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:53.312 20:02:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:53.312 20:02:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:14:53.312 20:02:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:53.312 20:02:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:14:53.312 20:02:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:14:53.312 20:02:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:55.209 20:02:42 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:55.209 /dev/nvme0n1 ]] 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:55.209 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:55.466 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:55.466 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:55.466 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:55.466 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:55.466 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:55.466 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:55.466 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:55.466 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:55.466 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:55.466 20:02:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:55.466 20:02:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:55.466 20:02:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:55.723 20:02:43 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:55.723 rmmod nvme_tcp 00:14:55.723 rmmod nvme_fabrics 00:14:55.723 rmmod nvme_keyring 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3154916 ']' 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3154916 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 3154916 ']' 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 3154916 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3154916 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3154916' 00:14:55.723 killing process with pid 3154916 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 3154916 00:14:55.723 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 3154916 00:14:55.980 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:55.980 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:55.980 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:55.980 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:55.980 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:55.980 20:02:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.980 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.980 20:02:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.510 20:02:45 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:58.510 00:14:58.510 real 0m8.533s 00:14:58.510 user 0m16.402s 00:14:58.510 sys 0m2.263s 00:14:58.510 20:02:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:58.510 20:02:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:58.510 ************************************ 00:14:58.510 END TEST nvmf_nvme_cli 00:14:58.510 ************************************ 00:14:58.510 20:02:45 nvmf_tcp -- 
nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:58.510 20:02:45 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:58.510 20:02:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:58.510 20:02:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:58.510 20:02:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:58.510 ************************************ 00:14:58.510 START TEST nvmf_vfio_user 00:14:58.510 ************************************ 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:58.510 * Looking for test storage... 00:14:58.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:58.510 
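The vfio-user test starting here brings up one malloc-backed subsystem per device; a minimal sketch of the RPC sequence it issues for device 1, assuming the rpc.py path set above (device 2 repeats the pattern with Malloc2/cnode2/SPDK2):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER                         # once, before adding listeners
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1                # socket directory doubles as the listener address
    $rpc bdev_malloc_create 64 512 -b Malloc1                      # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0             # -a is a directory here, not an IP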
20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3155843 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3155843' 00:14:58.510 Process pid: 3155843 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3155843 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3155843 ']' 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:58.510 20:02:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:58.510 [2024-07-13 20:02:45.782516] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:14:58.510 [2024-07-13 20:02:45.782612] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.510 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.510 [2024-07-13 20:02:45.850530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:58.510 [2024-07-13 20:02:45.942028] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:58.510 [2024-07-13 20:02:45.942089] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:58.510 [2024-07-13 20:02:45.942117] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:58.510 [2024-07-13 20:02:45.942131] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:58.510 [2024-07-13 20:02:45.942143] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:58.510 [2024-07-13 20:02:45.942216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.510 [2024-07-13 20:02:45.942300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.510 [2024-07-13 20:02:45.942395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.510 [2024-07-13 20:02:45.942397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.510 20:02:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:58.510 20:02:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:14:58.510 20:02:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:59.443 20:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:59.701 20:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:59.959 20:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:59.959 20:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.959 20:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:59.959 20:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:00.217 Malloc1 00:15:00.217 20:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:00.510 20:02:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:00.792 20:02:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:01.050 20:02:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:01.050 20:02:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:01.050 20:02:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:01.308 Malloc2 00:15:01.308 20:02:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:01.565 20:02:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:01.823 20:02:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:02.084 20:02:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:02.084 20:02:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:02.084 20:02:49 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:02.084 20:02:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:02.084 20:02:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:02.084 20:02:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:02.084 [2024-07-13 20:02:49.574305] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:02.084 [2024-07-13 20:02:49.574339] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3156274 ] 00:15:02.084 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.084 [2024-07-13 20:02:49.606004] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:02.084 [2024-07-13 20:02:49.615408] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:02.085 [2024-07-13 20:02:49.615434] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7faf38d12000 00:15:02.085 [2024-07-13 20:02:49.616406] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.085 [2024-07-13 20:02:49.617403] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.085 [2024-07-13 20:02:49.618413] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.085 [2024-07-13 20:02:49.619414] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:02.085 [2024-07-13 20:02:49.620418] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:02.085 [2024-07-13 20:02:49.621424] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.085 [2024-07-13 20:02:49.622434] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:02.085 [2024-07-13 20:02:49.623434] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:02.085 [2024-07-13 20:02:49.624446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:02.085 [2024-07-13 20:02:49.624465] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7faf37ac8000 00:15:02.085 [2024-07-13 20:02:49.625578] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:02.085 [2024-07-13 20:02:49.640457] vfio_user_pci.c: 386:spdk_vfio_user_setup: 
*DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:02.085 [2024-07-13 20:02:49.640498] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:02.085 [2024-07-13 20:02:49.645556] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:02.085 [2024-07-13 20:02:49.645609] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:02.085 [2024-07-13 20:02:49.645695] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:02.085 [2024-07-13 20:02:49.645722] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:02.085 [2024-07-13 20:02:49.645732] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:02.085 [2024-07-13 20:02:49.646558] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:02.085 [2024-07-13 20:02:49.646581] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:02.085 [2024-07-13 20:02:49.646594] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:02.085 [2024-07-13 20:02:49.647562] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:02.085 [2024-07-13 20:02:49.647580] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:02.085 [2024-07-13 20:02:49.647593] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:02.085 [2024-07-13 20:02:49.648569] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:02.085 [2024-07-13 20:02:49.648586] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:02.085 [2024-07-13 20:02:49.649573] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:02.085 [2024-07-13 20:02:49.649591] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:02.085 [2024-07-13 20:02:49.649600] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:02.085 [2024-07-13 20:02:49.649611] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:02.085 [2024-07-13 20:02:49.649720] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:02.085 [2024-07-13 20:02:49.649727] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:02.085 [2024-07-13 20:02:49.649736] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:02.085 [2024-07-13 20:02:49.650586] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:02.085 [2024-07-13 20:02:49.651587] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:02.085 [2024-07-13 20:02:49.652593] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:02.085 [2024-07-13 20:02:49.653591] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.085 [2024-07-13 20:02:49.653720] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:02.085 [2024-07-13 20:02:49.654874] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:02.085 [2024-07-13 20:02:49.654892] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:02.085 [2024-07-13 20:02:49.654901] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:02.085 [2024-07-13 20:02:49.654924] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:02.085 [2024-07-13 20:02:49.654938] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:02.085 [2024-07-13 20:02:49.654964] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:02.085 [2024-07-13 20:02:49.654973] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.085 [2024-07-13 20:02:49.654992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.085 [2024-07-13 20:02:49.655057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:02.085 [2024-07-13 20:02:49.655079] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:02.085 [2024-07-13 20:02:49.655088] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:02.085 [2024-07-13 20:02:49.655095] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:02.085 [2024-07-13 20:02:49.655103] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:02.085 [2024-07-13 20:02:49.655111] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:02.085 [2024-07-13 20:02:49.655119] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:02.085 [2024-07-13 20:02:49.655127] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:02.085 [2024-07-13 20:02:49.655138] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:02.085 [2024-07-13 20:02:49.655161] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:02.085 [2024-07-13 20:02:49.655194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:02.085 [2024-07-13 20:02:49.655222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.085 [2024-07-13 20:02:49.655235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.085 [2024-07-13 20:02:49.655247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.085 [2024-07-13 20:02:49.655258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:02.085 [2024-07-13 20:02:49.655267] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:02.085 [2024-07-13 20:02:49.655285] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:02.085 [2024-07-13 20:02:49.655299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:02.086 [2024-07-13 20:02:49.655311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:02.086 [2024-07-13 20:02:49.655321] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:02.086 [2024-07-13 20:02:49.655330] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:02.086 [2024-07-13 20:02:49.655340] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:02.086 [2024-07-13 20:02:49.655352] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:02.086 [2024-07-13 20:02:49.655365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:02.086 [2024-07-13 20:02:49.655377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:02.086 [2024-07-13 20:02:49.655439] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:02.086 [2024-07-13 20:02:49.655454] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:02.086 [2024-07-13 20:02:49.655466] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:02.086 [2024-07-13 20:02:49.655474] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:02.086 [2024-07-13 20:02:49.655484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:02.086 [2024-07-13 20:02:49.655498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:02.086 [2024-07-13 20:02:49.655513] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:02.086 [2024-07-13 20:02:49.655531] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:02.086 [2024-07-13 20:02:49.655544] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:02.086 [2024-07-13 20:02:49.655555] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:02.086 [2024-07-13 20:02:49.655563] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.086 [2024-07-13 20:02:49.655572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.086 [2024-07-13 20:02:49.655591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:02.086 [2024-07-13 20:02:49.655610] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:02.086 [2024-07-13 20:02:49.655624] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:02.086 [2024-07-13 20:02:49.655636] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:02.086 [2024-07-13 20:02:49.655647] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.086 [2024-07-13 20:02:49.655657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.086 [2024-07-13 20:02:49.655671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:02.086 [2024-07-13 20:02:49.655685] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:02.086 [2024-07-13 20:02:49.655695] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:15:02.086 [2024-07-13 20:02:49.655708] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:02.086 [2024-07-13 20:02:49.655717] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:02.086 [2024-07-13 20:02:49.655726] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:02.086 [2024-07-13 20:02:49.655734] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:02.086 [2024-07-13 20:02:49.655741] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:02.086 [2024-07-13 20:02:49.655749] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:02.086 [2024-07-13 20:02:49.655775] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:02.086 [2024-07-13 20:02:49.655793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:02.086 [2024-07-13 20:02:49.655811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:02.086 [2024-07-13 20:02:49.655823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:02.086 [2024-07-13 20:02:49.655839] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:02.086 [2024-07-13 20:02:49.655874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:02.086 [2024-07-13 20:02:49.655893] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:02.086 [2024-07-13 20:02:49.655905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:02.086 [2024-07-13 20:02:49.655923] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:02.086 [2024-07-13 20:02:49.655932] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:02.086 [2024-07-13 20:02:49.655938] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:02.086 [2024-07-13 20:02:49.655944] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:02.086 [2024-07-13 20:02:49.655954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:02.086 [2024-07-13 20:02:49.655965] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:02.086 [2024-07-13 20:02:49.655973] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:02.086 [2024-07-13 20:02:49.655982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:02.086 [2024-07-13 20:02:49.655997] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:02.086 [2024-07-13 20:02:49.656006] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:02.086 [2024-07-13 20:02:49.656014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:02.086 [2024-07-13 20:02:49.656026] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:02.086 [2024-07-13 20:02:49.656035] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:02.086 [2024-07-13 20:02:49.656043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:02.086 [2024-07-13 20:02:49.656055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:02.086 [2024-07-13 20:02:49.656075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:02.086 [2024-07-13 20:02:49.656090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:02.086 [2024-07-13 20:02:49.656107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:02.086 ===================================================== 00:15:02.086 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.086 ===================================================== 00:15:02.086 Controller Capabilities/Features 00:15:02.086 ================================ 00:15:02.086 Vendor ID: 4e58 00:15:02.086 Subsystem Vendor ID: 4e58 00:15:02.086 Serial Number: SPDK1 00:15:02.086 Model Number: SPDK bdev Controller 00:15:02.086 Firmware Version: 24.05.1 00:15:02.086 Recommended Arb Burst: 6 00:15:02.086 IEEE OUI Identifier: 8d 6b 50 00:15:02.086 Multi-path I/O 00:15:02.086 May have multiple subsystem ports: Yes 00:15:02.086 May have multiple controllers: Yes 00:15:02.086 Associated with SR-IOV VF: No 00:15:02.086 Max Data Transfer Size: 131072 00:15:02.086 Max Number of Namespaces: 32 00:15:02.086 Max Number of I/O Queues: 127 00:15:02.086 NVMe Specification Version (VS): 1.3 00:15:02.086 NVMe Specification Version (Identify): 1.3 00:15:02.086 Maximum Queue Entries: 256 00:15:02.086 Contiguous Queues Required: Yes 00:15:02.086 Arbitration Mechanisms Supported 00:15:02.086 Weighted Round Robin: Not Supported 00:15:02.086 Vendor Specific: Not Supported 00:15:02.086 Reset Timeout: 15000 ms 00:15:02.087 Doorbell Stride: 4 bytes 00:15:02.087 NVM Subsystem Reset: Not Supported 00:15:02.087 Command Sets Supported 00:15:02.087 NVM Command Set: Supported 00:15:02.087 Boot Partition: Not Supported 00:15:02.087 Memory Page Size Minimum: 4096 bytes 00:15:02.087 Memory Page Size Maximum: 4096 bytes 00:15:02.087 Persistent Memory Region: Not Supported 00:15:02.087 Optional Asynchronous Events Supported 00:15:02.087 Namespace Attribute Notices: Supported 00:15:02.087 Firmware Activation Notices: Not Supported 00:15:02.087 ANA Change Notices: Not Supported 
00:15:02.087 PLE Aggregate Log Change Notices: Not Supported 00:15:02.087 LBA Status Info Alert Notices: Not Supported 00:15:02.087 EGE Aggregate Log Change Notices: Not Supported 00:15:02.087 Normal NVM Subsystem Shutdown event: Not Supported 00:15:02.087 Zone Descriptor Change Notices: Not Supported 00:15:02.087 Discovery Log Change Notices: Not Supported 00:15:02.087 Controller Attributes 00:15:02.087 128-bit Host Identifier: Supported 00:15:02.087 Non-Operational Permissive Mode: Not Supported 00:15:02.087 NVM Sets: Not Supported 00:15:02.087 Read Recovery Levels: Not Supported 00:15:02.087 Endurance Groups: Not Supported 00:15:02.087 Predictable Latency Mode: Not Supported 00:15:02.087 Traffic Based Keep ALive: Not Supported 00:15:02.087 Namespace Granularity: Not Supported 00:15:02.087 SQ Associations: Not Supported 00:15:02.087 UUID List: Not Supported 00:15:02.087 Multi-Domain Subsystem: Not Supported 00:15:02.087 Fixed Capacity Management: Not Supported 00:15:02.087 Variable Capacity Management: Not Supported 00:15:02.087 Delete Endurance Group: Not Supported 00:15:02.087 Delete NVM Set: Not Supported 00:15:02.087 Extended LBA Formats Supported: Not Supported 00:15:02.087 Flexible Data Placement Supported: Not Supported 00:15:02.087 00:15:02.087 Controller Memory Buffer Support 00:15:02.087 ================================ 00:15:02.087 Supported: No 00:15:02.087 00:15:02.087 Persistent Memory Region Support 00:15:02.087 ================================ 00:15:02.087 Supported: No 00:15:02.087 00:15:02.087 Admin Command Set Attributes 00:15:02.087 ============================ 00:15:02.087 Security Send/Receive: Not Supported 00:15:02.087 Format NVM: Not Supported 00:15:02.087 Firmware Activate/Download: Not Supported 00:15:02.087 Namespace Management: Not Supported 00:15:02.087 Device Self-Test: Not Supported 00:15:02.087 Directives: Not Supported 00:15:02.087 NVMe-MI: Not Supported 00:15:02.087 Virtualization Management: Not Supported 00:15:02.087 Doorbell Buffer Config: Not Supported 00:15:02.087 Get LBA Status Capability: Not Supported 00:15:02.087 Command & Feature Lockdown Capability: Not Supported 00:15:02.087 Abort Command Limit: 4 00:15:02.087 Async Event Request Limit: 4 00:15:02.087 Number of Firmware Slots: N/A 00:15:02.087 Firmware Slot 1 Read-Only: N/A 00:15:02.087 Firmware Activation Without Reset: N/A 00:15:02.087 Multiple Update Detection Support: N/A 00:15:02.087 Firmware Update Granularity: No Information Provided 00:15:02.087 Per-Namespace SMART Log: No 00:15:02.087 Asymmetric Namespace Access Log Page: Not Supported 00:15:02.087 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:02.087 Command Effects Log Page: Supported 00:15:02.087 Get Log Page Extended Data: Supported 00:15:02.087 Telemetry Log Pages: Not Supported 00:15:02.087 Persistent Event Log Pages: Not Supported 00:15:02.087 Supported Log Pages Log Page: May Support 00:15:02.087 Commands Supported & Effects Log Page: Not Supported 00:15:02.087 Feature Identifiers & Effects Log Page:May Support 00:15:02.087 NVMe-MI Commands & Effects Log Page: May Support 00:15:02.087 Data Area 4 for Telemetry Log: Not Supported 00:15:02.087 Error Log Page Entries Supported: 128 00:15:02.087 Keep Alive: Supported 00:15:02.087 Keep Alive Granularity: 10000 ms 00:15:02.087 00:15:02.087 NVM Command Set Attributes 00:15:02.087 ========================== 00:15:02.087 Submission Queue Entry Size 00:15:02.087 Max: 64 00:15:02.087 Min: 64 00:15:02.087 Completion Queue Entry Size 00:15:02.087 Max: 16 00:15:02.087 Min: 16 
00:15:02.087 Number of Namespaces: 32 00:15:02.087 Compare Command: Supported 00:15:02.087 Write Uncorrectable Command: Not Supported 00:15:02.087 Dataset Management Command: Supported 00:15:02.087 Write Zeroes Command: Supported 00:15:02.087 Set Features Save Field: Not Supported 00:15:02.087 Reservations: Not Supported 00:15:02.087 Timestamp: Not Supported 00:15:02.087 Copy: Supported 00:15:02.087 Volatile Write Cache: Present 00:15:02.087 Atomic Write Unit (Normal): 1 00:15:02.087 Atomic Write Unit (PFail): 1 00:15:02.087 Atomic Compare & Write Unit: 1 00:15:02.087 Fused Compare & Write: Supported 00:15:02.087 Scatter-Gather List 00:15:02.087 SGL Command Set: Supported (Dword aligned) 00:15:02.087 SGL Keyed: Not Supported 00:15:02.087 SGL Bit Bucket Descriptor: Not Supported 00:15:02.087 SGL Metadata Pointer: Not Supported 00:15:02.087 Oversized SGL: Not Supported 00:15:02.087 SGL Metadata Address: Not Supported 00:15:02.087 SGL Offset: Not Supported 00:15:02.087 Transport SGL Data Block: Not Supported 00:15:02.087 Replay Protected Memory Block: Not Supported 00:15:02.087 00:15:02.087 Firmware Slot Information 00:15:02.087 ========================= 00:15:02.087 Active slot: 1 00:15:02.087 Slot 1 Firmware Revision: 24.05.1 00:15:02.087 00:15:02.087 00:15:02.087 Commands Supported and Effects 00:15:02.087 ============================== 00:15:02.087 Admin Commands 00:15:02.087 -------------- 00:15:02.087 Get Log Page (02h): Supported 00:15:02.087 Identify (06h): Supported 00:15:02.087 Abort (08h): Supported 00:15:02.087 Set Features (09h): Supported 00:15:02.087 Get Features (0Ah): Supported 00:15:02.087 Asynchronous Event Request (0Ch): Supported 00:15:02.087 Keep Alive (18h): Supported 00:15:02.087 I/O Commands 00:15:02.087 ------------ 00:15:02.087 Flush (00h): Supported LBA-Change 00:15:02.087 Write (01h): Supported LBA-Change 00:15:02.087 Read (02h): Supported 00:15:02.087 Compare (05h): Supported 00:15:02.087 Write Zeroes (08h): Supported LBA-Change 00:15:02.087 Dataset Management (09h): Supported LBA-Change 00:15:02.087 Copy (19h): Supported LBA-Change 00:15:02.087 Unknown (79h): Supported LBA-Change 00:15:02.087 Unknown (7Ah): Supported 00:15:02.087 00:15:02.087 Error Log 00:15:02.087 ========= 00:15:02.087 00:15:02.087 Arbitration 00:15:02.087 =========== 00:15:02.087 Arbitration Burst: 1 00:15:02.087 00:15:02.087 Power Management 00:15:02.087 ================ 00:15:02.088 Number of Power States: 1 00:15:02.088 Current Power State: Power State #0 00:15:02.088 Power State #0: 00:15:02.088 Max Power: 0.00 W 00:15:02.088 Non-Operational State: Operational 00:15:02.088 Entry Latency: Not Reported 00:15:02.088 Exit Latency: Not Reported 00:15:02.088 Relative Read Throughput: 0 00:15:02.088 Relative Read Latency: 0 00:15:02.088 Relative Write Throughput: 0 00:15:02.088 Relative Write Latency: 0 00:15:02.088 Idle Power: Not Reported 00:15:02.088 Active Power: Not Reported 00:15:02.088 Non-Operational Permissive Mode: Not Supported 00:15:02.088 00:15:02.088 Health Information 00:15:02.088 ================== 00:15:02.088 Critical Warnings: 00:15:02.088 Available Spare Space: OK 00:15:02.088 Temperature: OK 00:15:02.088 Device Reliability: OK 00:15:02.088 Read Only: No 00:15:02.088 Volatile Memory Backup: OK 00:15:02.088 [2024-07-13 20:02:49.656248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:02.088 [2024-07-13 20:02:49.656264] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:02.088 [2024-07-13 20:02:49.656298] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:02.088 [2024-07-13 20:02:49.656314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.088 [2024-07-13 20:02:49.656325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.088 [2024-07-13 20:02:49.656335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.088 [2024-07-13 20:02:49.656344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:02.088 [2024-07-13 20:02:49.659876] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:02.088 [2024-07-13 20:02:49.659897] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:02.088 [2024-07-13 20:02:49.660650] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.088 [2024-07-13 20:02:49.660724] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:02.088 [2024-07-13 20:02:49.660738] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:02.088 [2024-07-13 20:02:49.661653] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:02.088 [2024-07-13 20:02:49.661675] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:02.088 [2024-07-13 20:02:49.661741] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:02.088 [2024-07-13 20:02:49.663691] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:02.088 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:02.088 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:02.088 Available Spare: 0% 00:15:02.088 Available Spare Threshold: 0% 00:15:02.088 Life Percentage Used: 0% 00:15:02.088 Data Units Read: 0 00:15:02.088 Data Units Written: 0 00:15:02.088 Host Read Commands: 0 00:15:02.088 Host Write Commands: 0 00:15:02.088 Controller Busy Time: 0 minutes 00:15:02.088 Power Cycles: 0 00:15:02.088 Power On Hours: 0 hours 00:15:02.088 Unsafe Shutdowns: 0 00:15:02.088 Unrecoverable Media Errors: 0 00:15:02.088 Lifetime Error Log Entries: 0 00:15:02.088 Warning Temperature Time: 0 minutes 00:15:02.088 Critical Temperature Time: 0 minutes 00:15:02.088 00:15:02.088 Number of Queues 00:15:02.088 ================ 00:15:02.088 Number of I/O Submission Queues: 127 00:15:02.088 Number of I/O Completion Queues: 127 00:15:02.088 00:15:02.088 Active Namespaces 00:15:02.088 ================= 00:15:02.088 Namespace ID:1 00:15:02.088 Error Recovery Timeout: Unlimited 00:15:02.088 Command Set Identifier: NVM (00h) 00:15:02.088 Deallocate: Supported 00:15:02.088 Deallocated/Unwritten Error: Not Supported
00:15:02.088 Deallocated Read Value: Unknown 00:15:02.088 Deallocate in Write Zeroes: Not Supported 00:15:02.088 Deallocated Guard Field: 0xFFFF 00:15:02.088 Flush: Supported 00:15:02.088 Reservation: Supported 00:15:02.088 Namespace Sharing Capabilities: Multiple Controllers 00:15:02.088 Size (in LBAs): 131072 (0GiB) 00:15:02.088 Capacity (in LBAs): 131072 (0GiB) 00:15:02.088 Utilization (in LBAs): 131072 (0GiB) 00:15:02.088 NGUID: B18EB594138744E4A340A67E30779285 00:15:02.088 UUID: b18eb594-1387-44e4-a340-a67e30779285 00:15:02.088 Thin Provisioning: Not Supported 00:15:02.088 Per-NS Atomic Units: Yes 00:15:02.088 Atomic Boundary Size (Normal): 0 00:15:02.088 Atomic Boundary Size (PFail): 0 00:15:02.088 Atomic Boundary Offset: 0 00:15:02.088 Maximum Single Source Range Length: 65535 00:15:02.088 Maximum Copy Length: 65535 00:15:02.088 Maximum Source Range Count: 1 00:15:02.088 NGUID/EUI64 Never Reused: No 00:15:02.088 Namespace Write Protected: No 00:15:02.088 Number of LBA Formats: 1 00:15:02.088 Current LBA Format: LBA Format #00 00:15:02.088 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:02.088 00:15:02.088 20:02:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:02.088 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.347 [2024-07-13 20:02:49.893792] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:07.608 Initializing NVMe Controllers 00:15:07.608 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:07.608 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:07.608 Initialization complete. Launching workers. 00:15:07.608 ======================================================== 00:15:07.608 Latency(us) 00:15:07.608 Device Information : IOPS MiB/s Average min max 00:15:07.608 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35322.24 137.98 3623.18 1185.30 8325.35 00:15:07.608 ======================================================== 00:15:07.609 Total : 35322.24 137.98 3623.18 1185.30 8325.35 00:15:07.609 00:15:07.609 [2024-07-13 20:02:54.919476] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:07.609 20:02:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:07.609 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.609 [2024-07-13 20:02:55.152595] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:12.873 Initializing NVMe Controllers 00:15:12.873 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:12.873 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:12.873 Initialization complete. Launching workers. 
00:15:12.873 ======================================================== 00:15:12.873 Latency(us) 00:15:12.873 Device Information : IOPS MiB/s Average min max 00:15:12.873 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7982.78 5984.53 11968.28 00:15:12.873 ======================================================== 00:15:12.873 Total : 16051.20 62.70 7982.78 5984.53 11968.28 00:15:12.873 00:15:12.873 [2024-07-13 20:03:00.187395] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.873 20:03:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:12.873 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.873 [2024-07-13 20:03:00.388356] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:18.135 [2024-07-13 20:03:05.463211] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:18.135 Initializing NVMe Controllers 00:15:18.135 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:18.135 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:18.135 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:18.135 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:18.135 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:18.135 Initialization complete. Launching workers. 00:15:18.135 Starting thread on core 2 00:15:18.135 Starting thread on core 3 00:15:18.135 Starting thread on core 1 00:15:18.135 20:03:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:18.135 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.135 [2024-07-13 20:03:05.768329] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:21.411 [2024-07-13 20:03:08.829576] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:21.411 Initializing NVMe Controllers 00:15:21.411 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.411 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.411 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:21.411 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:21.411 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:21.411 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:21.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:21.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:21.411 Initialization complete. Launching workers. 
00:15:21.411 Starting thread on core 1 with urgent priority queue 00:15:21.411 Starting thread on core 2 with urgent priority queue 00:15:21.411 Starting thread on core 3 with urgent priority queue 00:15:21.411 Starting thread on core 0 with urgent priority queue 00:15:21.411 SPDK bdev Controller (SPDK1 ) core 0: 4872.00 IO/s 20.53 secs/100000 ios 00:15:21.411 SPDK bdev Controller (SPDK1 ) core 1: 5996.33 IO/s 16.68 secs/100000 ios 00:15:21.411 SPDK bdev Controller (SPDK1 ) core 2: 5902.67 IO/s 16.94 secs/100000 ios 00:15:21.411 SPDK bdev Controller (SPDK1 ) core 3: 5722.33 IO/s 17.48 secs/100000 ios 00:15:21.411 ======================================================== 00:15:21.411 00:15:21.411 20:03:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:21.411 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.668 [2024-07-13 20:03:09.129447] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:21.668 Initializing NVMe Controllers 00:15:21.668 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.668 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.668 Namespace ID: 1 size: 0GB 00:15:21.668 Initialization complete. 00:15:21.668 INFO: using host memory buffer for IO 00:15:21.668 Hello world! 00:15:21.668 [2024-07-13 20:03:09.167046] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:21.668 20:03:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:21.668 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.924 [2024-07-13 20:03:09.453417] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:22.857 Initializing NVMe Controllers 00:15:22.857 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:22.857 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:22.857 Initialization complete. Launching workers. 
00:15:22.857 submit (in ns) avg, min, max = 7337.7, 3514.4, 4022014.4 00:15:22.857 complete (in ns) avg, min, max = 27190.0, 2068.9, 5011516.7 00:15:22.857 00:15:22.857 Submit histogram 00:15:22.857 ================ 00:15:22.857 Range in us Cumulative Count 00:15:22.857 3.508 - 3.532: 0.3615% ( 48) 00:15:22.857 3.532 - 3.556: 1.4685% ( 147) 00:15:22.857 3.556 - 3.579: 4.4732% ( 399) 00:15:22.857 3.579 - 3.603: 9.9104% ( 722) 00:15:22.857 3.603 - 3.627: 18.7288% ( 1171) 00:15:22.857 3.627 - 3.650: 28.0669% ( 1240) 00:15:22.857 3.650 - 3.674: 35.3942% ( 973) 00:15:22.857 3.674 - 3.698: 42.1493% ( 897) 00:15:22.857 3.698 - 3.721: 49.4841% ( 974) 00:15:22.857 3.721 - 3.745: 55.1698% ( 755) 00:15:22.857 3.745 - 3.769: 60.0196% ( 644) 00:15:22.857 3.769 - 3.793: 63.6720% ( 485) 00:15:22.857 3.793 - 3.816: 66.4433% ( 368) 00:15:22.857 3.816 - 3.840: 70.0279% ( 476) 00:15:22.857 3.840 - 3.864: 73.9438% ( 520) 00:15:22.857 3.864 - 3.887: 77.5736% ( 482) 00:15:22.857 3.887 - 3.911: 80.8946% ( 441) 00:15:22.857 3.911 - 3.935: 83.7638% ( 381) 00:15:22.857 3.935 - 3.959: 85.6691% ( 253) 00:15:22.857 3.959 - 3.982: 87.5819% ( 254) 00:15:22.857 3.982 - 4.006: 89.1182% ( 204) 00:15:22.857 4.006 - 4.030: 90.2252% ( 147) 00:15:22.857 4.030 - 4.053: 91.0761% ( 113) 00:15:22.857 4.053 - 4.077: 91.9798% ( 120) 00:15:22.857 4.077 - 4.101: 92.9136% ( 124) 00:15:22.857 4.101 - 4.124: 93.5914% ( 90) 00:15:22.857 4.124 - 4.148: 94.0733% ( 64) 00:15:22.857 4.148 - 4.172: 94.4725% ( 53) 00:15:22.857 4.172 - 4.196: 94.7737% ( 40) 00:15:22.857 4.196 - 4.219: 94.9996% ( 30) 00:15:22.857 4.219 - 4.243: 95.2030% ( 27) 00:15:22.857 4.243 - 4.267: 95.3611% ( 21) 00:15:22.857 4.267 - 4.290: 95.4741% ( 15) 00:15:22.857 4.290 - 4.314: 95.5569% ( 11) 00:15:22.857 4.314 - 4.338: 95.6548% ( 13) 00:15:22.857 4.338 - 4.361: 95.7376% ( 11) 00:15:22.858 4.361 - 4.385: 95.7903% ( 7) 00:15:22.858 4.385 - 4.409: 95.8506% ( 8) 00:15:22.858 4.409 - 4.433: 95.9108% ( 8) 00:15:22.858 4.433 - 4.456: 95.9410% ( 4) 00:15:22.858 4.456 - 4.480: 95.9711% ( 4) 00:15:22.858 4.480 - 4.504: 96.0087% ( 5) 00:15:22.858 4.504 - 4.527: 96.0238% ( 2) 00:15:22.858 4.551 - 4.575: 96.0539% ( 4) 00:15:22.858 4.575 - 4.599: 96.0991% ( 6) 00:15:22.858 4.599 - 4.622: 96.1142% ( 2) 00:15:22.858 4.622 - 4.646: 96.1518% ( 5) 00:15:22.858 4.646 - 4.670: 96.2045% ( 7) 00:15:22.858 4.670 - 4.693: 96.2497% ( 6) 00:15:22.858 4.693 - 4.717: 96.2798% ( 4) 00:15:22.858 4.717 - 4.741: 96.3250% ( 6) 00:15:22.858 4.741 - 4.764: 96.3401% ( 2) 00:15:22.858 4.764 - 4.788: 96.4455% ( 14) 00:15:22.858 4.788 - 4.812: 96.4832% ( 5) 00:15:22.858 4.812 - 4.836: 96.5359% ( 7) 00:15:22.858 4.836 - 4.859: 96.5660% ( 4) 00:15:22.858 4.859 - 4.883: 96.5886% ( 3) 00:15:22.858 4.883 - 4.907: 96.6338% ( 6) 00:15:22.858 4.907 - 4.930: 96.7016% ( 9) 00:15:22.858 4.930 - 4.954: 96.7693% ( 9) 00:15:22.858 4.954 - 4.978: 96.7995% ( 4) 00:15:22.858 4.978 - 5.001: 96.8220% ( 3) 00:15:22.858 5.001 - 5.025: 96.8371% ( 2) 00:15:22.858 5.025 - 5.049: 96.8748% ( 5) 00:15:22.858 5.049 - 5.073: 96.9199% ( 6) 00:15:22.858 5.073 - 5.096: 96.9425% ( 3) 00:15:22.858 5.096 - 5.120: 96.9877% ( 6) 00:15:22.858 5.120 - 5.144: 96.9953% ( 1) 00:15:22.858 5.144 - 5.167: 97.0178% ( 3) 00:15:22.858 5.167 - 5.191: 97.0329% ( 2) 00:15:22.858 5.191 - 5.215: 97.0404% ( 1) 00:15:22.858 5.215 - 5.239: 97.0630% ( 3) 00:15:22.858 5.239 - 5.262: 97.0856% ( 3) 00:15:22.858 5.262 - 5.286: 97.1007% ( 2) 00:15:22.858 5.286 - 5.310: 97.1383% ( 5) 00:15:22.858 5.333 - 5.357: 97.1685% ( 4) 00:15:22.858 5.357 - 5.381: 97.1986% ( 4) 
00:15:22.858 5.381 - 5.404: 97.2061% ( 1) 00:15:22.858 5.404 - 5.428: 97.2287% ( 3) 00:15:22.858 5.428 - 5.452: 97.2362% ( 1) 00:15:22.858 5.452 - 5.476: 97.2438% ( 1) 00:15:22.858 5.476 - 5.499: 97.2513% ( 1) 00:15:22.858 5.499 - 5.523: 97.2588% ( 1) 00:15:22.858 5.523 - 5.547: 97.2814% ( 3) 00:15:22.858 5.547 - 5.570: 97.2890% ( 1) 00:15:22.858 5.570 - 5.594: 97.3040% ( 2) 00:15:22.858 5.618 - 5.641: 97.3115% ( 1) 00:15:22.858 5.689 - 5.713: 97.3341% ( 3) 00:15:22.858 5.713 - 5.736: 97.3492% ( 2) 00:15:22.858 5.736 - 5.760: 97.3643% ( 2) 00:15:22.858 5.760 - 5.784: 97.4019% ( 5) 00:15:22.858 5.784 - 5.807: 97.4170% ( 2) 00:15:22.858 5.807 - 5.831: 97.4471% ( 4) 00:15:22.858 5.831 - 5.855: 97.4546% ( 1) 00:15:22.858 5.902 - 5.926: 97.4622% ( 1) 00:15:22.858 5.950 - 5.973: 97.4697% ( 1) 00:15:22.858 5.997 - 6.021: 97.4772% ( 1) 00:15:22.858 6.021 - 6.044: 97.4848% ( 1) 00:15:22.858 6.068 - 6.116: 97.4923% ( 1) 00:15:22.858 6.163 - 6.210: 97.4998% ( 1) 00:15:22.858 6.210 - 6.258: 97.5073% ( 1) 00:15:22.858 6.258 - 6.305: 97.5224% ( 2) 00:15:22.858 6.305 - 6.353: 97.5375% ( 2) 00:15:22.858 6.353 - 6.400: 97.5450% ( 1) 00:15:22.858 6.542 - 6.590: 97.5525% ( 1) 00:15:22.858 6.590 - 6.637: 97.5601% ( 1) 00:15:22.858 6.779 - 6.827: 97.5676% ( 1) 00:15:22.858 6.827 - 6.874: 97.5902% ( 3) 00:15:22.858 6.921 - 6.969: 97.5977% ( 1) 00:15:22.858 6.969 - 7.016: 97.6128% ( 2) 00:15:22.858 7.064 - 7.111: 97.6278% ( 2) 00:15:22.858 7.111 - 7.159: 97.6504% ( 3) 00:15:22.858 7.159 - 7.206: 97.6580% ( 1) 00:15:22.858 7.253 - 7.301: 97.6881% ( 4) 00:15:22.858 7.301 - 7.348: 97.7031% ( 2) 00:15:22.858 7.348 - 7.396: 97.7182% ( 2) 00:15:22.858 7.396 - 7.443: 97.7257% ( 1) 00:15:22.858 7.443 - 7.490: 97.7483% ( 3) 00:15:22.858 7.538 - 7.585: 97.7634% ( 2) 00:15:22.858 7.633 - 7.680: 97.7860% ( 3) 00:15:22.858 7.680 - 7.727: 97.7935% ( 1) 00:15:22.858 7.727 - 7.775: 97.8086% ( 2) 00:15:22.858 7.775 - 7.822: 97.8236% ( 2) 00:15:22.858 7.822 - 7.870: 97.8387% ( 2) 00:15:22.858 7.870 - 7.917: 97.8613% ( 3) 00:15:22.858 7.917 - 7.964: 97.8763% ( 2) 00:15:22.858 7.964 - 8.012: 97.8914% ( 2) 00:15:22.858 8.012 - 8.059: 97.8989% ( 1) 00:15:22.858 8.059 - 8.107: 97.9140% ( 2) 00:15:22.858 8.107 - 8.154: 97.9215% ( 1) 00:15:22.858 8.154 - 8.201: 97.9366% ( 2) 00:15:22.858 8.201 - 8.249: 97.9441% ( 1) 00:15:22.858 8.249 - 8.296: 97.9592% ( 2) 00:15:22.858 8.296 - 8.344: 97.9742% ( 2) 00:15:22.858 8.439 - 8.486: 97.9893% ( 2) 00:15:22.858 8.486 - 8.533: 98.0119% ( 3) 00:15:22.858 8.533 - 8.581: 98.0194% ( 1) 00:15:22.858 8.581 - 8.628: 98.0270% ( 1) 00:15:22.858 8.628 - 8.676: 98.0345% ( 1) 00:15:22.858 8.676 - 8.723: 98.0420% ( 1) 00:15:22.858 8.723 - 8.770: 98.0646% ( 3) 00:15:22.858 8.770 - 8.818: 98.0797% ( 2) 00:15:22.858 8.818 - 8.865: 98.0872% ( 1) 00:15:22.858 8.865 - 8.913: 98.1023% ( 2) 00:15:22.858 8.960 - 9.007: 98.1173% ( 2) 00:15:22.858 9.007 - 9.055: 98.1324% ( 2) 00:15:22.858 9.055 - 9.102: 98.1550% ( 3) 00:15:22.858 9.102 - 9.150: 98.1700% ( 2) 00:15:22.858 9.150 - 9.197: 98.1851% ( 2) 00:15:22.858 9.244 - 9.292: 98.2002% ( 2) 00:15:22.858 9.292 - 9.339: 98.2077% ( 1) 00:15:22.858 9.339 - 9.387: 98.2152% ( 1) 00:15:22.858 9.387 - 9.434: 98.2303% ( 2) 00:15:22.858 9.434 - 9.481: 98.2453% ( 2) 00:15:22.858 9.481 - 9.529: 98.2604% ( 2) 00:15:22.858 9.529 - 9.576: 98.2679% ( 1) 00:15:22.858 9.624 - 9.671: 98.2830% ( 2) 00:15:22.858 9.671 - 9.719: 98.2905% ( 1) 00:15:22.858 9.813 - 9.861: 98.2981% ( 1) 00:15:22.858 9.861 - 9.908: 98.3207% ( 3) 00:15:22.858 9.908 - 9.956: 98.3282% ( 1) 00:15:22.858 9.956 - 
10.003: 98.3357% ( 1) 00:15:22.858 10.003 - 10.050: 98.3432% ( 1) 00:15:22.858 10.050 - 10.098: 98.3583% ( 2) 00:15:22.858 10.098 - 10.145: 98.3734% ( 2) 00:15:22.858 10.145 - 10.193: 98.3809% ( 1) 00:15:22.858 10.382 - 10.430: 98.3884% ( 1) 00:15:22.858 10.477 - 10.524: 98.4035% ( 2) 00:15:22.858 10.667 - 10.714: 98.4110% ( 1) 00:15:22.858 10.761 - 10.809: 98.4186% ( 1) 00:15:22.858 10.809 - 10.856: 98.4261% ( 1) 00:15:22.858 10.904 - 10.951: 98.4336% ( 1) 00:15:22.858 11.093 - 11.141: 98.4411% ( 1) 00:15:22.858 11.141 - 11.188: 98.4562% ( 2) 00:15:22.858 11.236 - 11.283: 98.4637% ( 1) 00:15:22.858 11.330 - 11.378: 98.4713% ( 1) 00:15:22.858 11.473 - 11.520: 98.4788% ( 1) 00:15:22.858 11.757 - 11.804: 98.4863% ( 1) 00:15:22.858 11.899 - 11.947: 98.4939% ( 1) 00:15:22.858 11.947 - 11.994: 98.5014% ( 1) 00:15:22.858 12.041 - 12.089: 98.5089% ( 1) 00:15:22.858 12.136 - 12.231: 98.5240% ( 2) 00:15:22.858 12.231 - 12.326: 98.5315% ( 1) 00:15:22.858 12.326 - 12.421: 98.5541% ( 3) 00:15:22.858 12.421 - 12.516: 98.5692% ( 2) 00:15:22.858 12.516 - 12.610: 98.5767% ( 1) 00:15:22.858 12.705 - 12.800: 98.5842% ( 1) 00:15:22.858 12.800 - 12.895: 98.5918% ( 1) 00:15:22.858 12.895 - 12.990: 98.6068% ( 2) 00:15:22.858 12.990 - 13.084: 98.6219% ( 2) 00:15:22.858 13.084 - 13.179: 98.6595% ( 5) 00:15:22.858 13.179 - 13.274: 98.6671% ( 1) 00:15:22.858 13.274 - 13.369: 98.6821% ( 2) 00:15:22.858 13.369 - 13.464: 98.6897% ( 1) 00:15:22.858 13.464 - 13.559: 98.7123% ( 3) 00:15:22.858 13.559 - 13.653: 98.7198% ( 1) 00:15:22.858 13.653 - 13.748: 98.7273% ( 1) 00:15:22.858 13.748 - 13.843: 98.7499% ( 3) 00:15:22.858 13.843 - 13.938: 98.7650% ( 2) 00:15:22.858 14.033 - 14.127: 98.7725% ( 1) 00:15:22.858 14.222 - 14.317: 98.7876% ( 2) 00:15:22.858 14.317 - 14.412: 98.8252% ( 5) 00:15:22.858 14.412 - 14.507: 98.8327% ( 1) 00:15:22.858 14.507 - 14.601: 98.8403% ( 1) 00:15:22.858 14.601 - 14.696: 98.8478% ( 1) 00:15:22.858 14.791 - 14.886: 98.8704% ( 3) 00:15:22.858 14.886 - 14.981: 98.8779% ( 1) 00:15:22.858 14.981 - 15.076: 98.8930% ( 2) 00:15:22.858 15.170 - 15.265: 98.9005% ( 1) 00:15:22.858 15.360 - 15.455: 98.9081% ( 1) 00:15:22.858 15.550 - 15.644: 98.9156% ( 1) 00:15:22.858 16.498 - 16.593: 98.9231% ( 1) 00:15:22.858 17.256 - 17.351: 98.9306% ( 1) 00:15:22.858 17.351 - 17.446: 98.9457% ( 2) 00:15:22.858 17.446 - 17.541: 98.9683% ( 3) 00:15:22.858 17.541 - 17.636: 99.0059% ( 5) 00:15:22.858 17.636 - 17.730: 99.0285% ( 3) 00:15:22.858 17.730 - 17.825: 99.0662% ( 5) 00:15:22.858 17.825 - 17.920: 99.1566% ( 12) 00:15:22.858 17.920 - 18.015: 99.2469% ( 12) 00:15:22.858 18.015 - 18.110: 99.2921% ( 6) 00:15:22.858 18.110 - 18.204: 99.3599% ( 9) 00:15:22.858 18.204 - 18.299: 99.3975% ( 5) 00:15:22.858 18.299 - 18.394: 99.4503% ( 7) 00:15:22.858 18.394 - 18.489: 99.5105% ( 8) 00:15:22.858 18.489 - 18.584: 99.5557% ( 6) 00:15:22.858 18.584 - 18.679: 99.5708% ( 2) 00:15:22.858 18.679 - 18.773: 99.5858% ( 2) 00:15:22.858 18.773 - 18.868: 99.6385% ( 7) 00:15:22.858 18.868 - 18.963: 99.6536% ( 2) 00:15:22.858 18.963 - 19.058: 99.6837% ( 4) 00:15:22.858 19.058 - 19.153: 99.7063% ( 3) 00:15:22.858 19.153 - 19.247: 99.7138% ( 1) 00:15:22.858 19.247 - 19.342: 99.7289% ( 2) 00:15:22.858 19.342 - 19.437: 99.7440% ( 2) 00:15:22.858 19.721 - 19.816: 99.7590% ( 2) 00:15:22.858 19.816 - 19.911: 99.7665% ( 1) 00:15:22.858 19.911 - 20.006: 99.7741% ( 1) 00:15:22.858 20.006 - 20.101: 99.7816% ( 1) 00:15:22.858 21.713 - 21.807: 99.7891% ( 1) 00:15:22.858 21.807 - 21.902: 99.7967% ( 1) 00:15:22.858 22.850 - 22.945: 99.8042% ( 1) 
00:15:22.858 25.031 - 25.221: 99.8117% ( 1) 00:15:22.858 25.221 - 25.410: 99.8268% ( 2) 00:15:22.858 25.790 - 25.979: 99.8343% ( 1) 00:15:22.858 26.738 - 26.927: 99.8494% ( 2) 00:15:22.858 27.117 - 27.307: 99.8569% ( 1) 00:15:22.858 27.876 - 28.065: 99.8644% ( 1) 00:15:22.858 28.065 - 28.255: 99.8720% ( 1) 00:15:22.858 28.255 - 28.444: 99.8795% ( 1) 00:15:22.858 28.444 - 28.634: 99.8946% ( 2) 00:15:22.858 28.634 - 28.824: 99.9021% ( 1) 00:15:22.858 28.824 - 29.013: 99.9096% ( 1) 00:15:22.858 32.427 - 32.616: 99.9172% ( 1) 00:15:22.858 3980.705 - 4004.978: 99.9774% ( 8) 00:15:22.858 4004.978 - 4029.250: 100.0000% ( 3) 00:15:22.858 00:15:22.858 Complete histogram 00:15:22.858 ================== 00:15:22.858 Range in us Cumulative Count 00:15:22.858 2.062 - 2.074: 0.1883% ( 25) 00:15:22.858 2.074 - 2.086: 15.9274% ( 2090) 00:15:22.858 2.086 - 2.098: 31.9226% ( 2124) 00:15:22.858 2.098 - 2.110: 36.4410% ( 600) 00:15:22.858 2.110 - 2.121: 50.0489% ( 1807) 00:15:22.858 2.121 - 2.133: 56.0358% ( 795) 00:15:22.858 2.133 - 2.145: 58.5737% ( 337) 00:15:22.858 2.145 - 2.157: 67.5352% ( 1190) 00:15:22.858 2.157 - 2.169: 71.7750% ( 563) 00:15:22.858 2.169 - 2.181: 73.8836% ( 280) 00:15:22.858 2.181 - 2.193: 78.6580% ( 634) 00:15:22.858 2.193 - 2.204: 80.7290% ( 275) 00:15:22.858 2.204 - 2.216: 81.8812% ( 153) 00:15:22.858 2.216 - 2.228: 85.2700% ( 450) 00:15:22.858 2.228 - 2.240: 87.7928% ( 335) 00:15:22.858 2.240 - 2.252: 89.8411% ( 272) 00:15:22.858 2.252 - 2.264: 92.1380% ( 305) 00:15:22.858 2.264 - 2.276: 93.1019% ( 128) 00:15:22.858 2.276 - 2.287: 93.5010% ( 53) 00:15:22.858 2.287 - 2.299: 93.7721% ( 36) 00:15:22.859 2.299 - 2.311: 94.0809% ( 41) 00:15:22.859 2.311 - 2.323: 94.7059% ( 83) 00:15:22.859 2.323 - 2.335: 95.0072% ( 40) 00:15:22.859 2.335 - 2.347: 95.1051% ( 13) 00:15:22.859 2.347 - 2.359: 95.1653% ( 8) 00:15:22.859 2.359 - 2.370: 95.3234% ( 21) 00:15:22.859 2.370 - 2.382: 95.5870% ( 35) 00:15:22.859 2.382 - 2.394: 95.9937% ( 54) 00:15:22.859 2.394 - 2.406: 96.2798% ( 38) 00:15:22.859 2.406 - 2.418: 96.5359% ( 34) 00:15:22.859 2.418 - 2.430: 96.7166% ( 24) 00:15:22.859 2.430 - 2.441: 96.8672% ( 20) 00:15:22.859 2.441 - 2.453: 96.9953% ( 17) 00:15:22.859 2.453 - 2.465: 97.1609% ( 22) 00:15:22.859 2.465 - 2.477: 97.2890% ( 17) 00:15:22.859 2.477 - 2.489: 97.3944% ( 14) 00:15:22.859 2.489 - 2.501: 97.4848% ( 12) 00:15:22.859 2.501 - 2.513: 97.5676% ( 11) 00:15:22.859 2.513 - 2.524: 97.6052% ( 5) 00:15:22.859 2.524 - 2.536: 97.6203% ( 2) 00:15:22.859 2.536 - 2.548: 97.6580% ( 5) 00:15:22.859 2.548 - 2.560: 97.6956% ( 5) 00:15:22.859 2.560 - 2.572: 97.7031% ( 1) 00:15:22.859 2.572 - 2.584: 97.7257% ( 3) 00:15:22.859 2.584 - 2.596: 97.7408% ( 2) 00:15:22.859 2.596 - 2.607: 97.7483% ( 1) 00:15:22.859 2.607 - 2.619: 97.7634% ( 2) 00:15:22.859 2.631 - 2.643: 97.7860% ( 3) 00:15:22.859 2.643 - 2.655: 97.8010% ( 2) 00:15:22.859 2.655 - 2.667: 97.8086% ( 1) 00:15:22.859 2.667 - 2.679: 97.8161% ( 1) 00:15:22.859 2.679 - 2.690: 97.8387% ( 3) 00:15:22.859 2.690 - 2.702: 97.8462% ( 1) 00:15:22.859 2.714 - 2.726: 97.8538% ( 1) 00:15:22.859 2.738 - 2.750: 97.8613% ( 1) 00:15:22.859 2.750 - 2.761: 97.8763% ( 2) 00:15:22.859 2.761 - 2.773: 97.8839% ( 1) 00:15:22.859 2.773 - 2.785: 97.8989% ( 2) 00:15:22.859 2.785 - 2.797: 97.9065% ( 1) 00:15:22.859 2.797 - 2.809: 97.9140% ( 1) 00:15:22.859 2.809 - 2.821: 97.9441% ( 4) 00:15:22.859 2.833 - 2.844: 97.9517% ( 1) 00:15:22.859 2.868 - 2.880: 97.9592% ( 1) 00:15:22.859 2.880 - 2.892: 97.9742% ( 2) 00:15:22.859 2.892 - 2.904: 97.9893% ( 2) 00:15:22.859 2.927 - 
2.939: 97.9968% ( 1) 00:15:22.859 2.939 - 2.951: 98.0044% ( 1) 00:15:22.859 2.951 - 2.963: 98.0119% ( 1) 00:15:22.859 2.975 - 2.987: 98.0194% ( 1) 00:15:22.859 2.987 - 2.999: 98.0345% ( 2) 00:15:22.859 2.999 - 3.010: 98.0420% ( 1) 00:15:22.859 3.010 - 3.022: 98.0646% ( 3) 00:15:22.859 3.022 - 3.034: 98.0721% ( 1) 00:15:22.859 3.034 - 3.058: 98.0797% ( 1) 00:15:22.859 3.058 - 3.081: 98.0872% ( 1) 00:15:22.859 3.105 - 3.129: 98.1098% ( 3) 00:15:22.859 3.129 - 3.153: 98.1475% ( 5) 00:15:22.859 3.153 - 3.176: 98.1700% ( 3) 00:15:22.859 3.200 - 3.224: 98.1851% ( 2) 00:15:22.859 3.224 - 3.247: 98.2077% ( 3) 00:15:22.859 3.247 - 3.271: 98.2303% ( 3) 00:15:22.859 3.271 - 3.295: 98.2378% ( 1) 00:15:22.859 3.295 - 3.319: 98.2679% ( 4) [2024-07-13 20:03:10.475546] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:23.116 3.319 - 3.342: 98.2830% ( 2) 00:15:23.116 3.342 - 3.366: 98.3282% ( 6) 00:15:23.116 3.390 - 3.413: 98.3432% ( 2) 00:15:23.116 3.413 - 3.437: 98.3583% ( 2) 00:15:23.116 3.437 - 3.461: 98.3809% ( 3) 00:15:23.116 3.461 - 3.484: 98.3960% ( 2) 00:15:23.116 3.484 - 3.508: 98.4186% ( 3) 00:15:23.116 3.508 - 3.532: 98.4336% ( 2) 00:15:23.117 3.532 - 3.556: 98.4411% ( 1) 00:15:23.117 3.556 - 3.579: 98.4713% ( 4) 00:15:23.117 3.579 - 3.603: 98.4788% ( 1) 00:15:23.117 3.603 - 3.627: 98.4863% ( 1) 00:15:23.117 3.627 - 3.650: 98.5014% ( 2) 00:15:23.117 3.650 - 3.674: 98.5089% ( 1) 00:15:23.117 3.674 - 3.698: 98.5165% ( 1) 00:15:23.117 3.698 - 3.721: 98.5240% ( 1) 00:15:23.117 3.745 - 3.769: 98.5315% ( 1) 00:15:23.117 3.769 - 3.793: 98.5390% ( 1) 00:15:23.117 3.793 - 3.816: 98.5541% ( 2) 00:15:23.117 3.816 - 3.840: 98.5616% ( 1) 00:15:23.117 3.864 - 3.887: 98.5692% ( 1) 00:15:23.117 4.006 - 4.030: 98.5767% ( 1) 00:15:23.117 4.030 - 4.053: 98.5842% ( 1) 00:15:23.117 4.101 - 4.124: 98.5918% ( 1) 00:15:23.117 4.219 - 4.243: 98.5993% ( 1) 00:15:23.117 4.314 - 4.338: 98.6068% ( 1) 00:15:23.117 4.504 - 4.527: 98.6144% ( 1) 00:15:23.117 5.096 - 5.120: 98.6219% ( 1) 00:15:23.117 5.120 - 5.144: 98.6294% ( 1) 00:15:23.117 5.144 - 5.167: 98.6369% ( 1) 00:15:23.117 5.333 - 5.357: 98.6445% ( 1) 00:15:23.117 5.452 - 5.476: 98.6520% ( 1) 00:15:23.117 6.021 - 6.044: 98.6595% ( 1) 00:15:23.117 6.068 - 6.116: 98.6746% ( 2) 00:15:23.117 6.116 - 6.163: 98.6821% ( 1) 00:15:23.117 6.210 - 6.258: 98.6972% ( 2) 00:15:23.117 6.258 - 6.305: 98.7047% ( 1) 00:15:23.117 6.353 - 6.400: 98.7123% ( 1) 00:15:23.117 6.495 - 6.542: 98.7198% ( 1) 00:15:23.117 6.637 - 6.684: 98.7273% ( 1) 00:15:23.117 6.827 - 6.874: 98.7348% ( 1) 00:15:23.117 7.490 - 7.538: 98.7424% ( 1) 00:15:23.117 8.344 - 8.391: 98.7499% ( 1) 00:15:23.117 9.671 - 9.719: 98.7574% ( 1) 00:15:23.117 10.098 - 10.145: 98.7650% ( 1) 00:15:23.117 10.951 - 10.999: 98.7725% ( 1) 00:15:23.117 15.550 - 15.644: 98.7951% ( 2) 00:15:23.117 15.644 - 15.739: 98.8177% ( 3) 00:15:23.117 15.739 - 15.834: 98.8252% ( 1) 00:15:23.117 15.834 - 15.929: 98.8629% ( 5) 00:15:23.117 15.929 - 16.024: 98.9005% ( 5) 00:15:23.117 16.024 - 16.119: 98.9306% ( 4) 00:15:23.117 16.119 - 16.213: 98.9608% ( 3) 00:15:23.117 16.213 - 16.308: 98.9834% ( 3) 00:15:23.117 16.308 - 16.403: 99.0135% ( 4) 00:15:23.117 16.403 - 16.498: 99.0361% ( 3) 00:15:23.117 16.498 - 16.593: 99.0662% ( 4) 00:15:23.117 16.593 - 16.687: 99.1038% ( 5) 00:15:23.117 16.687 - 16.782: 99.1641% ( 8) 00:15:23.117 16.782 - 16.877: 99.2017% ( 5) 00:15:23.117 16.877 - 16.972: 99.2394% ( 5) 00:15:23.117 16.972 - 17.067: 99.2695% ( 4) 00:15:23.117 17.067 - 17.161:
99.2771% ( 1) 00:15:23.117 17.256 - 17.351: 99.2996% ( 3) 00:15:23.117 17.351 - 17.446: 99.3072% ( 1) 00:15:23.117 17.446 - 17.541: 99.3298% ( 3) 00:15:23.117 17.541 - 17.636: 99.3373% ( 1) 00:15:23.117 17.825 - 17.920: 99.3448% ( 1) 00:15:23.117 18.015 - 18.110: 99.3524% ( 1) 00:15:23.117 19.247 - 19.342: 99.3674% ( 2) 00:15:23.117 19.437 - 19.532: 99.3750% ( 1) 00:15:23.117 3009.801 - 3021.938: 99.3825% ( 1) 00:15:23.117 3034.074 - 3046.210: 99.3900% ( 1) 00:15:23.117 3058.347 - 3070.483: 99.3975% ( 1) 00:15:23.117 3980.705 - 4004.978: 99.8644% ( 62) 00:15:23.117 4004.978 - 4029.250: 99.9774% ( 15) 00:15:23.117 4029.250 - 4053.523: 99.9849% ( 1) 00:15:23.117 4975.881 - 5000.154: 99.9925% ( 1) 00:15:23.117 5000.154 - 5024.427: 100.0000% ( 1) 00:15:23.117 00:15:23.117 20:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:23.117 20:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:23.117 20:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:23.117 20:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:23.117 20:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:23.117 [ 00:15:23.117 { 00:15:23.117 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:23.117 "subtype": "Discovery", 00:15:23.117 "listen_addresses": [], 00:15:23.117 "allow_any_host": true, 00:15:23.117 "hosts": [] 00:15:23.117 }, 00:15:23.117 { 00:15:23.117 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:23.117 "subtype": "NVMe", 00:15:23.117 "listen_addresses": [ 00:15:23.117 { 00:15:23.117 "trtype": "VFIOUSER", 00:15:23.117 "adrfam": "IPv4", 00:15:23.117 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:23.117 "trsvcid": "0" 00:15:23.117 } 00:15:23.117 ], 00:15:23.117 "allow_any_host": true, 00:15:23.117 "hosts": [], 00:15:23.117 "serial_number": "SPDK1", 00:15:23.117 "model_number": "SPDK bdev Controller", 00:15:23.117 "max_namespaces": 32, 00:15:23.117 "min_cntlid": 1, 00:15:23.117 "max_cntlid": 65519, 00:15:23.117 "namespaces": [ 00:15:23.117 { 00:15:23.117 "nsid": 1, 00:15:23.117 "bdev_name": "Malloc1", 00:15:23.117 "name": "Malloc1", 00:15:23.117 "nguid": "B18EB594138744E4A340A67E30779285", 00:15:23.117 "uuid": "b18eb594-1387-44e4-a340-a67e30779285" 00:15:23.117 } 00:15:23.117 ] 00:15:23.117 }, 00:15:23.117 { 00:15:23.117 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:23.117 "subtype": "NVMe", 00:15:23.117 "listen_addresses": [ 00:15:23.117 { 00:15:23.117 "trtype": "VFIOUSER", 00:15:23.117 "adrfam": "IPv4", 00:15:23.117 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:23.117 "trsvcid": "0" 00:15:23.117 } 00:15:23.117 ], 00:15:23.117 "allow_any_host": true, 00:15:23.117 "hosts": [], 00:15:23.117 "serial_number": "SPDK2", 00:15:23.117 "model_number": "SPDK bdev Controller", 00:15:23.117 "max_namespaces": 32, 00:15:23.117 "min_cntlid": 1, 00:15:23.117 "max_cntlid": 65519, 00:15:23.117 "namespaces": [ 00:15:23.117 { 00:15:23.117 "nsid": 1, 00:15:23.117 "bdev_name": "Malloc2", 00:15:23.117 "name": "Malloc2", 00:15:23.117 "nguid": "7116ADD65C7B4D869DB24A1665F1D588", 00:15:23.117 "uuid": "7116add6-5c7b-4d86-9db2-4a1665f1d588" 00:15:23.117 } 00:15:23.117 ] 00:15:23.117 } 00:15:23.117 ] 00:15:23.117 20:03:10 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:23.117 20:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3159403 00:15:23.117 20:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:23.117 20:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:23.117 20:03:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:23.117 20:03:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:23.117 20:03:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:23.117 20:03:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:23.117 20:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:23.117 20:03:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:23.375 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.375 [2024-07-13 20:03:10.923391] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:23.375 Malloc3 00:15:23.375 20:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:23.639 [2024-07-13 20:03:11.267913] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:23.639 20:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:23.937 Asynchronous Event Request test 00:15:23.937 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.937 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:23.937 Registering asynchronous event callbacks... 00:15:23.937 Starting namespace attribute notice tests for all controllers... 00:15:23.937 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:23.937 aer_cb - Changed Namespace 00:15:23.937 Cleaning up... 
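[Editor's note, not part of the captured console output] The aer test above attaches with the same -r transport string used by every tool in this run, registers an AER callback, and waits for the namespace-change notice that the nvmf_subsystem_add_ns RPC provokes. Below is a minimal host-side sketch of that attach-and-register flow, written against SPDK's public C API (spdk/env.h, spdk/nvme.h) in the v24.05 tree under test; the program name and the endless polling loop are illustrative assumptions, and error handling is trimmed.

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Invoked from spdk_nvme_ctrlr_process_admin_completions() when an
 * ASYNC EVENT REQUEST completes; log page 4 (changed namespace list)
 * is what the test above waits for. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	printf("AER completion, cdw0 0x%x\n", cpl->cdw0);
}

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "aer_sketch"; /* illustrative name, not from the log */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same key:value string the tools receive via -r. */
	memset(&trid, 0, sizeof(trid));
	spdk_nvme_transport_id_parse(&trid,
	    "trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 "
	    "subnqn:nqn.2019-07.io.spdk:cnode1");

	/* Drives the CC.EN = 1 / CSTS.RDY = 1 / IDENTIFY sequence logged above. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/* Poll the admin queue; a changed-namespace event lands in aer_cb. */
	for (;;) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}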
00:15:23.937 [ 00:15:23.937 { 00:15:23.937 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:23.937 "subtype": "Discovery", 00:15:23.937 "listen_addresses": [], 00:15:23.937 "allow_any_host": true, 00:15:23.937 "hosts": [] 00:15:23.937 }, 00:15:23.937 { 00:15:23.937 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:23.937 "subtype": "NVMe", 00:15:23.937 "listen_addresses": [ 00:15:23.937 { 00:15:23.937 "trtype": "VFIOUSER", 00:15:23.937 "adrfam": "IPv4", 00:15:23.937 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:23.937 "trsvcid": "0" 00:15:23.937 } 00:15:23.937 ], 00:15:23.937 "allow_any_host": true, 00:15:23.937 "hosts": [], 00:15:23.937 "serial_number": "SPDK1", 00:15:23.937 "model_number": "SPDK bdev Controller", 00:15:23.937 "max_namespaces": 32, 00:15:23.937 "min_cntlid": 1, 00:15:23.937 "max_cntlid": 65519, 00:15:23.937 "namespaces": [ 00:15:23.937 { 00:15:23.937 "nsid": 1, 00:15:23.937 "bdev_name": "Malloc1", 00:15:23.937 "name": "Malloc1", 00:15:23.937 "nguid": "B18EB594138744E4A340A67E30779285", 00:15:23.937 "uuid": "b18eb594-1387-44e4-a340-a67e30779285" 00:15:23.937 }, 00:15:23.937 { 00:15:23.937 "nsid": 2, 00:15:23.937 "bdev_name": "Malloc3", 00:15:23.937 "name": "Malloc3", 00:15:23.937 "nguid": "213B7462827349108A454CEEA02AF82B", 00:15:23.937 "uuid": "213b7462-8273-4910-8a45-4ceea02af82b" 00:15:23.937 } 00:15:23.937 ] 00:15:23.937 }, 00:15:23.937 { 00:15:23.937 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:23.937 "subtype": "NVMe", 00:15:23.937 "listen_addresses": [ 00:15:23.937 { 00:15:23.937 "trtype": "VFIOUSER", 00:15:23.937 "adrfam": "IPv4", 00:15:23.937 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:23.937 "trsvcid": "0" 00:15:23.937 } 00:15:23.937 ], 00:15:23.937 "allow_any_host": true, 00:15:23.937 "hosts": [], 00:15:23.937 "serial_number": "SPDK2", 00:15:23.937 "model_number": "SPDK bdev Controller", 00:15:23.937 "max_namespaces": 32, 00:15:23.937 "min_cntlid": 1, 00:15:23.937 "max_cntlid": 65519, 00:15:23.937 "namespaces": [ 00:15:23.937 { 00:15:23.937 "nsid": 1, 00:15:23.937 "bdev_name": "Malloc2", 00:15:23.937 "name": "Malloc2", 00:15:23.937 "nguid": "7116ADD65C7B4D869DB24A1665F1D588", 00:15:23.937 "uuid": "7116add6-5c7b-4d86-9db2-4a1665f1d588" 00:15:23.937 } 00:15:23.937 ] 00:15:23.937 } 00:15:23.937 ] 00:15:23.937 20:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3159403 00:15:23.937 20:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:23.937 20:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:23.937 20:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:23.937 20:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:23.937 [2024-07-13 20:03:11.546839] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:15:23.937 [2024-07-13 20:03:11.546901] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3159533 ] 00:15:23.937 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.197 [2024-07-13 20:03:11.578476] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:24.197 [2024-07-13 20:03:11.587137] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:24.197 [2024-07-13 20:03:11.587167] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0bec745000 00:15:24.197 [2024-07-13 20:03:11.588136] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:24.197 [2024-07-13 20:03:11.589142] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:24.197 [2024-07-13 20:03:11.590157] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:24.197 [2024-07-13 20:03:11.591177] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:24.197 [2024-07-13 20:03:11.592181] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:24.197 [2024-07-13 20:03:11.593185] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:24.197 [2024-07-13 20:03:11.594196] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:24.197 [2024-07-13 20:03:11.595204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:24.197 [2024-07-13 20:03:11.596204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:24.197 [2024-07-13 20:03:11.596235] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0beb4fb000 00:15:24.197 [2024-07-13 20:03:11.597500] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:24.197 [2024-07-13 20:03:11.612394] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:24.197 [2024-07-13 20:03:11.612428] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:24.197 [2024-07-13 20:03:11.617520] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:24.197 [2024-07-13 20:03:11.617572] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:24.197 [2024-07-13 20:03:11.617656] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:15:24.197 [2024-07-13 20:03:11.617679] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:24.197 [2024-07-13 20:03:11.617689] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:24.197 [2024-07-13 20:03:11.618523] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:24.197 [2024-07-13 20:03:11.618547] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:24.197 [2024-07-13 20:03:11.618560] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:24.197 [2024-07-13 20:03:11.619527] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:24.197 [2024-07-13 20:03:11.619552] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:24.197 [2024-07-13 20:03:11.619567] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:24.197 [2024-07-13 20:03:11.620535] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:24.197 [2024-07-13 20:03:11.620554] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:24.197 [2024-07-13 20:03:11.621544] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:24.197 [2024-07-13 20:03:11.621563] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:24.197 [2024-07-13 20:03:11.621572] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:24.197 [2024-07-13 20:03:11.621583] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:24.197 [2024-07-13 20:03:11.621692] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:24.197 [2024-07-13 20:03:11.621700] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:24.197 [2024-07-13 20:03:11.621708] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:24.197 [2024-07-13 20:03:11.622549] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:24.197 [2024-07-13 20:03:11.623548] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:24.197 [2024-07-13 20:03:11.624557] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:24.197 [2024-07-13 20:03:11.625551] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:24.197 [2024-07-13 20:03:11.625632] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:24.197 [2024-07-13 20:03:11.626568] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:24.197 [2024-07-13 20:03:11.626587] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:24.197 [2024-07-13 20:03:11.626596] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:24.197 [2024-07-13 20:03:11.626620] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:24.197 [2024-07-13 20:03:11.626633] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:24.197 [2024-07-13 20:03:11.626656] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:24.197 [2024-07-13 20:03:11.626665] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:24.197 [2024-07-13 20:03:11.626683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:24.197 [2024-07-13 20:03:11.632881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:24.197 [2024-07-13 20:03:11.632911] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:24.197 [2024-07-13 20:03:11.632922] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:24.197 [2024-07-13 20:03:11.632930] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:24.197 [2024-07-13 20:03:11.632938] nvme_ctrlr.c:2004:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:24.197 [2024-07-13 20:03:11.632946] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:24.197 [2024-07-13 20:03:11.632954] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:24.197 [2024-07-13 20:03:11.632962] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:24.197 [2024-07-13 20:03:11.632975] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:24.197 [2024-07-13 20:03:11.632991] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:24.197 [2024-07-13 20:03:11.640878] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:24.197 [2024-07-13 20:03:11.640902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.197 [2024-07-13 20:03:11.640916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.198 [2024-07-13 20:03:11.640929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.198 [2024-07-13 20:03:11.640942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:24.198 [2024-07-13 20:03:11.640951] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.640968] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.640983] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:24.198 [2024-07-13 20:03:11.648879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:24.198 [2024-07-13 20:03:11.648897] nvme_ctrlr.c:2892:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:24.198 [2024-07-13 20:03:11.648906] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.648933] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.648947] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.648962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:24.198 [2024-07-13 20:03:11.656889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:24.198 [2024-07-13 20:03:11.656962] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.656983] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.656997] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:24.198 [2024-07-13 20:03:11.657005] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:24.198 [2024-07-13 20:03:11.657015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:24.198 
[2024-07-13 20:03:11.664877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:24.198 [2024-07-13 20:03:11.664900] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:24.198 [2024-07-13 20:03:11.664917] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.664931] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.664943] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:24.198 [2024-07-13 20:03:11.664952] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:24.198 [2024-07-13 20:03:11.664961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:24.198 [2024-07-13 20:03:11.672876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:24.198 [2024-07-13 20:03:11.672904] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.672921] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.672934] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:24.198 [2024-07-13 20:03:11.672942] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:24.198 [2024-07-13 20:03:11.672952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:24.198 [2024-07-13 20:03:11.680878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:24.198 [2024-07-13 20:03:11.680899] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.680911] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.680927] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.680938] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.680946] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.680954] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:24.198 [2024-07-13 20:03:11.680962] 
nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:24.198 [2024-07-13 20:03:11.680974] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:24.198 [2024-07-13 20:03:11.681001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:24.198 [2024-07-13 20:03:11.688874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:24.198 [2024-07-13 20:03:11.688901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:24.198 [2024-07-13 20:03:11.696878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:24.198 [2024-07-13 20:03:11.696903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:24.198 [2024-07-13 20:03:11.704891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:24.198 [2024-07-13 20:03:11.704916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:24.198 [2024-07-13 20:03:11.711895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:24.198 [2024-07-13 20:03:11.711921] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:24.198 [2024-07-13 20:03:11.711931] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:24.198 [2024-07-13 20:03:11.711938] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:24.198 [2024-07-13 20:03:11.711944] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:24.198 [2024-07-13 20:03:11.711954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:24.198 [2024-07-13 20:03:11.711965] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:24.198 [2024-07-13 20:03:11.711973] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:24.198 [2024-07-13 20:03:11.711983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:24.198 [2024-07-13 20:03:11.711994] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:24.198 [2024-07-13 20:03:11.712002] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:24.198 [2024-07-13 20:03:11.712010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:24.198 [2024-07-13 20:03:11.712022] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:24.198 [2024-07-13 20:03:11.712030] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:24.198 [2024-07-13 20:03:11.712039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:24.198 [2024-07-13 20:03:11.720878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:24.198 [2024-07-13 20:03:11.720905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:24.198 [2024-07-13 20:03:11.720921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:24.198 [2024-07-13 20:03:11.720936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:24.198 ===================================================== 00:15:24.198 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:24.198 ===================================================== 00:15:24.198 Controller Capabilities/Features 00:15:24.198 ================================ 00:15:24.198 Vendor ID: 4e58 00:15:24.198 Subsystem Vendor ID: 4e58 00:15:24.198 Serial Number: SPDK2 00:15:24.198 Model Number: SPDK bdev Controller 00:15:24.198 Firmware Version: 24.05.1 00:15:24.198 Recommended Arb Burst: 6 00:15:24.198 IEEE OUI Identifier: 8d 6b 50 00:15:24.198 Multi-path I/O 00:15:24.198 May have multiple subsystem ports: Yes 00:15:24.198 May have multiple controllers: Yes 00:15:24.198 Associated with SR-IOV VF: No 00:15:24.198 Max Data Transfer Size: 131072 00:15:24.198 Max Number of Namespaces: 32 00:15:24.198 Max Number of I/O Queues: 127 00:15:24.198 NVMe Specification Version (VS): 1.3 00:15:24.198 NVMe Specification Version (Identify): 1.3 00:15:24.198 Maximum Queue Entries: 256 00:15:24.198 Contiguous Queues Required: Yes 00:15:24.198 Arbitration Mechanisms Supported 00:15:24.198 Weighted Round Robin: Not Supported 00:15:24.198 Vendor Specific: Not Supported 00:15:24.198 Reset Timeout: 15000 ms 00:15:24.198 Doorbell Stride: 4 bytes 00:15:24.198 NVM Subsystem Reset: Not Supported 00:15:24.198 Command Sets Supported 00:15:24.198 NVM Command Set: Supported 00:15:24.198 Boot Partition: Not Supported 00:15:24.198 Memory Page Size Minimum: 4096 bytes 00:15:24.198 Memory Page Size Maximum: 4096 bytes 00:15:24.198 Persistent Memory Region: Not Supported 00:15:24.198 Optional Asynchronous Events Supported 00:15:24.198 Namespace Attribute Notices: Supported 00:15:24.198 Firmware Activation Notices: Not Supported 00:15:24.198 ANA Change Notices: Not Supported 00:15:24.198 PLE Aggregate Log Change Notices: Not Supported 00:15:24.198 LBA Status Info Alert Notices: Not Supported 00:15:24.198 EGE Aggregate Log Change Notices: Not Supported 00:15:24.198 Normal NVM Subsystem Shutdown event: Not Supported 00:15:24.198 Zone Descriptor Change Notices: Not Supported 00:15:24.198 Discovery Log Change Notices: Not Supported 00:15:24.198 Controller Attributes 00:15:24.198 128-bit Host Identifier: Supported 00:15:24.198 Non-Operational Permissive Mode: Not Supported 00:15:24.198 NVM Sets: Not Supported 00:15:24.199 Read Recovery Levels: Not Supported 00:15:24.199 Endurance Groups: Not Supported 00:15:24.199 Predictable Latency Mode: Not Supported 00:15:24.199 Traffic Based Keep ALive: Not Supported 00:15:24.199 Namespace Granularity: Not 
Supported 00:15:24.199 SQ Associations: Not Supported 00:15:24.199 UUID List: Not Supported 00:15:24.199 Multi-Domain Subsystem: Not Supported 00:15:24.199 Fixed Capacity Management: Not Supported 00:15:24.199 Variable Capacity Management: Not Supported 00:15:24.199 Delete Endurance Group: Not Supported 00:15:24.199 Delete NVM Set: Not Supported 00:15:24.199 Extended LBA Formats Supported: Not Supported 00:15:24.199 Flexible Data Placement Supported: Not Supported 00:15:24.199 00:15:24.199 Controller Memory Buffer Support 00:15:24.199 ================================ 00:15:24.199 Supported: No 00:15:24.199 00:15:24.199 Persistent Memory Region Support 00:15:24.199 ================================ 00:15:24.199 Supported: No 00:15:24.199 00:15:24.199 Admin Command Set Attributes 00:15:24.199 ============================ 00:15:24.199 Security Send/Receive: Not Supported 00:15:24.199 Format NVM: Not Supported 00:15:24.199 Firmware Activate/Download: Not Supported 00:15:24.199 Namespace Management: Not Supported 00:15:24.199 Device Self-Test: Not Supported 00:15:24.199 Directives: Not Supported 00:15:24.199 NVMe-MI: Not Supported 00:15:24.199 Virtualization Management: Not Supported 00:15:24.199 Doorbell Buffer Config: Not Supported 00:15:24.199 Get LBA Status Capability: Not Supported 00:15:24.199 Command & Feature Lockdown Capability: Not Supported 00:15:24.199 Abort Command Limit: 4 00:15:24.199 Async Event Request Limit: 4 00:15:24.199 Number of Firmware Slots: N/A 00:15:24.199 Firmware Slot 1 Read-Only: N/A 00:15:24.199 Firmware Activation Without Reset: N/A 00:15:24.199 Multiple Update Detection Support: N/A 00:15:24.199 Firmware Update Granularity: No Information Provided 00:15:24.199 Per-Namespace SMART Log: No 00:15:24.199 Asymmetric Namespace Access Log Page: Not Supported 00:15:24.199 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:24.199 Command Effects Log Page: Supported 00:15:24.199 Get Log Page Extended Data: Supported 00:15:24.199 Telemetry Log Pages: Not Supported 00:15:24.199 Persistent Event Log Pages: Not Supported 00:15:24.199 Supported Log Pages Log Page: May Support 00:15:24.199 Commands Supported & Effects Log Page: Not Supported 00:15:24.199 Feature Identifiers & Effects Log Page:May Support 00:15:24.199 NVMe-MI Commands & Effects Log Page: May Support 00:15:24.199 Data Area 4 for Telemetry Log: Not Supported 00:15:24.199 Error Log Page Entries Supported: 128 00:15:24.199 Keep Alive: Supported 00:15:24.199 Keep Alive Granularity: 10000 ms 00:15:24.199 00:15:24.199 NVM Command Set Attributes 00:15:24.199 ========================== 00:15:24.199 Submission Queue Entry Size 00:15:24.199 Max: 64 00:15:24.199 Min: 64 00:15:24.199 Completion Queue Entry Size 00:15:24.199 Max: 16 00:15:24.199 Min: 16 00:15:24.199 Number of Namespaces: 32 00:15:24.199 Compare Command: Supported 00:15:24.199 Write Uncorrectable Command: Not Supported 00:15:24.199 Dataset Management Command: Supported 00:15:24.199 Write Zeroes Command: Supported 00:15:24.199 Set Features Save Field: Not Supported 00:15:24.199 Reservations: Not Supported 00:15:24.199 Timestamp: Not Supported 00:15:24.199 Copy: Supported 00:15:24.199 Volatile Write Cache: Present 00:15:24.199 Atomic Write Unit (Normal): 1 00:15:24.199 Atomic Write Unit (PFail): 1 00:15:24.199 Atomic Compare & Write Unit: 1 00:15:24.199 Fused Compare & Write: Supported 00:15:24.199 Scatter-Gather List 00:15:24.199 SGL Command Set: Supported (Dword aligned) 00:15:24.199 SGL Keyed: Not Supported 00:15:24.199 SGL Bit Bucket Descriptor: Not Supported 
00:15:24.199 SGL Metadata Pointer: Not Supported 00:15:24.199 Oversized SGL: Not Supported 00:15:24.199 SGL Metadata Address: Not Supported 00:15:24.199 SGL Offset: Not Supported 00:15:24.199 Transport SGL Data Block: Not Supported 00:15:24.199 Replay Protected Memory Block: Not Supported 00:15:24.199 00:15:24.199 Firmware Slot Information 00:15:24.199 ========================= 00:15:24.199 Active slot: 1 00:15:24.199 Slot 1 Firmware Revision: 24.05.1 00:15:24.199 00:15:24.199 00:15:24.199 Commands Supported and Effects 00:15:24.199 ============================== 00:15:24.199 Admin Commands 00:15:24.199 -------------- 00:15:24.199 Get Log Page (02h): Supported 00:15:24.199 Identify (06h): Supported 00:15:24.199 Abort (08h): Supported 00:15:24.199 Set Features (09h): Supported 00:15:24.199 Get Features (0Ah): Supported 00:15:24.199 Asynchronous Event Request (0Ch): Supported 00:15:24.199 Keep Alive (18h): Supported 00:15:24.199 I/O Commands 00:15:24.199 ------------ 00:15:24.199 Flush (00h): Supported LBA-Change 00:15:24.199 Write (01h): Supported LBA-Change 00:15:24.199 Read (02h): Supported 00:15:24.199 Compare (05h): Supported 00:15:24.199 Write Zeroes (08h): Supported LBA-Change 00:15:24.199 Dataset Management (09h): Supported LBA-Change 00:15:24.199 Copy (19h): Supported LBA-Change 00:15:24.199 Unknown (79h): Supported LBA-Change 00:15:24.199 Unknown (7Ah): Supported 00:15:24.199 00:15:24.199 Error Log 00:15:24.199 ========= 00:15:24.199 00:15:24.199 Arbitration 00:15:24.199 =========== 00:15:24.199 Arbitration Burst: 1 00:15:24.199 00:15:24.199 Power Management 00:15:24.199 ================ 00:15:24.199 Number of Power States: 1 00:15:24.199 Current Power State: Power State #0 00:15:24.199 Power State #0: 00:15:24.199 Max Power: 0.00 W 00:15:24.199 Non-Operational State: Operational 00:15:24.199 Entry Latency: Not Reported 00:15:24.199 Exit Latency: Not Reported 00:15:24.199 Relative Read Throughput: 0 00:15:24.199 Relative Read Latency: 0 00:15:24.199 Relative Write Throughput: 0 00:15:24.199 Relative Write Latency: 0 00:15:24.199 Idle Power: Not Reported 00:15:24.199 Active Power: Not Reported 00:15:24.199 Non-Operational Permissive Mode: Not Supported 00:15:24.199 00:15:24.199 Health Information 00:15:24.199 ================== 00:15:24.199 Critical Warnings: 00:15:24.199 Available Spare Space: OK 00:15:24.199 Temperature: OK 00:15:24.199 Device Reliability: OK 00:15:24.199 Read Only: No 00:15:24.199 Volatile Memory Backup: OK 00:15:24.199 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:24.199 [2024-07-13 20:03:11.721054] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:24.199 [2024-07-13 20:03:11.728879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:24.199 [2024-07-13 20:03:11.728921] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:24.199 [2024-07-13 20:03:11.728939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.199 [2024-07-13 20:03:11.728950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.199 [2024-07-13 20:03:11.728959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.199 [2024-07-13
20:03:11.728969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:24.199 [2024-07-13 20:03:11.729046] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:24.199 [2024-07-13 20:03:11.729067] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:24.199 [2024-07-13 20:03:11.730049] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:24.199 [2024-07-13 20:03:11.730119] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:24.199 [2024-07-13 20:03:11.730134] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:24.199 [2024-07-13 20:03:11.731054] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:24.200 [2024-07-13 20:03:11.731079] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:24.200 [2024-07-13 20:03:11.731131] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:24.200 [2024-07-13 20:03:11.732336] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:24.200 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:24.200 Available Spare: 0% 00:15:24.200 Available Spare Threshold: 0% 00:15:24.200 Life Percentage Used: 0% 00:15:24.200 Data Units Read: 0 00:15:24.200 Data Units Written: 0 00:15:24.200 Host Read Commands: 0 00:15:24.200 Host Write Commands: 0 00:15:24.200 Controller Busy Time: 0 minutes 00:15:24.200 Power Cycles: 0 00:15:24.200 Power On Hours: 0 hours 00:15:24.200 Unsafe Shutdowns: 0 00:15:24.200 Unrecoverable Media Errors: 0 00:15:24.200 Lifetime Error Log Entries: 0 00:15:24.200 Warning Temperature Time: 0 minutes 00:15:24.200 Critical Temperature Time: 0 minutes 00:15:24.200 00:15:24.200 Number of Queues 00:15:24.200 ================ 00:15:24.200 Number of I/O Submission Queues: 127 00:15:24.200 Number of I/O Completion Queues: 127 00:15:24.200 00:15:24.200 Active Namespaces 00:15:24.200 ================= 00:15:24.200 Namespace ID:1 00:15:24.200 Error Recovery Timeout: Unlimited 00:15:24.200 Command Set Identifier: NVM (00h) 00:15:24.200 Deallocate: Supported 00:15:24.200 Deallocated/Unwritten Error: Not Supported 00:15:24.200 Deallocated Read Value: Unknown 00:15:24.200 Deallocate in Write Zeroes: Not Supported 00:15:24.200 Deallocated Guard Field: 0xFFFF 00:15:24.200 Flush: Supported 00:15:24.200 Reservation: Supported 00:15:24.200 Namespace Sharing Capabilities: Multiple Controllers 00:15:24.200 Size (in LBAs): 131072 (0GiB) 00:15:24.200 Capacity (in LBAs): 131072 (0GiB) 00:15:24.200 Utilization (in LBAs): 131072 (0GiB) 00:15:24.200 NGUID: 7116ADD65C7B4D869DB24A1665F1D588 00:15:24.200 UUID: 7116add6-5c7b-4d86-9db2-4a1665f1d588 00:15:24.200 Thin Provisioning: Not Supported 00:15:24.200 Per-NS Atomic Units: Yes 00:15:24.200 Atomic Boundary Size (Normal): 0 00:15:24.200 Atomic Boundary Size (PFail): 0 00:15:24.200 Atomic Boundary Offset: 0 00:15:24.200 Maximum Single Source Range
Length: 65535 00:15:24.200 Maximum Copy Length: 65535 00:15:24.200 Maximum Source Range Count: 1 00:15:24.200 NGUID/EUI64 Never Reused: No 00:15:24.200 Namespace Write Protected: No 00:15:24.200 Number of LBA Formats: 1 00:15:24.200 Current LBA Format: LBA Format #00 00:15:24.200 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:24.200 00:15:24.200 20:03:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:24.200 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.458 [2024-07-13 20:03:11.955322] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.721 Initializing NVMe Controllers 00:15:29.721 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:29.721 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:29.721 Initialization complete. Launching workers. 00:15:29.721 ======================================================== 00:15:29.721 Latency(us) 00:15:29.721 Device Information : IOPS MiB/s Average min max 00:15:29.721 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35411.73 138.33 3613.93 1157.31 8314.52 00:15:29.721 ======================================================== 00:15:29.721 Total : 35411.73 138.33 3613.93 1157.31 8314.52 00:15:29.721 00:15:29.721 [2024-07-13 20:03:17.058231] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.721 20:03:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:29.721 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.721 [2024-07-13 20:03:17.296895] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.986 Initializing NVMe Controllers 00:15:34.986 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.986 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:34.986 Initialization complete. Launching workers. 
00:15:34.986 ======================================================== 00:15:34.986 Latency(us) 00:15:34.986 Device Information : IOPS MiB/s Average min max 00:15:34.986 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33840.36 132.19 3782.22 1175.47 7621.98 00:15:34.986 ======================================================== 00:15:34.986 Total : 33840.36 132.19 3782.22 1175.47 7621.98 00:15:34.986 00:15:34.986 [2024-07-13 20:03:22.322145] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.986 20:03:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:34.986 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.986 [2024-07-13 20:03:22.531636] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:40.244 [2024-07-13 20:03:27.665263] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:40.244 Initializing NVMe Controllers 00:15:40.244 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:40.244 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:40.244 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:40.244 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:40.244 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:40.244 Initialization complete. Launching workers. 00:15:40.244 Starting thread on core 2 00:15:40.244 Starting thread on core 3 00:15:40.244 Starting thread on core 1 00:15:40.244 20:03:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:40.244 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.502 [2024-07-13 20:03:27.969410] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.805 [2024-07-13 20:03:31.034235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.805 Initializing NVMe Controllers 00:15:43.805 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.805 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.805 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:43.805 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:43.805 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:43.805 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:43.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:43.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:43.805 Initialization complete. Launching workers. 
00:15:43.805 Starting thread on core 1 with urgent priority queue 00:15:43.805 Starting thread on core 2 with urgent priority queue 00:15:43.805 Starting thread on core 3 with urgent priority queue 00:15:43.805 Starting thread on core 0 with urgent priority queue 00:15:43.805 SPDK bdev Controller (SPDK2 ) core 0: 4861.67 IO/s 20.57 secs/100000 ios 00:15:43.805 SPDK bdev Controller (SPDK2 ) core 1: 4563.33 IO/s 21.91 secs/100000 ios 00:15:43.805 SPDK bdev Controller (SPDK2 ) core 2: 5027.00 IO/s 19.89 secs/100000 ios 00:15:43.805 SPDK bdev Controller (SPDK2 ) core 3: 4799.00 IO/s 20.84 secs/100000 ios 00:15:43.805 ======================================================== 00:15:43.805 00:15:43.805 20:03:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:43.805 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.805 [2024-07-13 20:03:31.314395] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:43.805 Initializing NVMe Controllers 00:15:43.805 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.805 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.805 Namespace ID: 1 size: 0GB 00:15:43.805 Initialization complete. 00:15:43.805 INFO: using host memory buffer for IO 00:15:43.805 Hello world! 00:15:43.805 [2024-07-13 20:03:31.327579] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:43.805 20:03:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:43.805 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.062 [2024-07-13 20:03:31.606316] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:45.435 Initializing NVMe Controllers 00:15:45.435 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.435 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.435 Initialization complete. Launching workers. 
00:15:45.435 submit (in ns) avg, min, max = 7800.7, 3497.8, 4021486.7 00:15:45.435 complete (in ns) avg, min, max = 26402.2, 2057.8, 7202305.6 00:15:45.435 00:15:45.435 Submit histogram 00:15:45.435 ================ 00:15:45.435 Range in us Cumulative Count 00:15:45.435 3.484 - 3.508: 0.0223% ( 3) 00:15:45.435 3.508 - 3.532: 0.3489% ( 44) 00:15:45.435 3.532 - 3.556: 1.6704% ( 178) 00:15:45.435 3.556 - 3.579: 4.7365% ( 413) 00:15:45.435 3.579 - 3.603: 10.8537% ( 824) 00:15:45.435 3.603 - 3.627: 19.8738% ( 1215) 00:15:45.435 3.627 - 3.650: 31.5071% ( 1567) 00:15:45.435 3.650 - 3.674: 40.5791% ( 1222) 00:15:45.435 3.674 - 3.698: 47.9139% ( 988) 00:15:45.435 3.698 - 3.721: 54.1425% ( 839) 00:15:45.435 3.721 - 3.745: 59.9332% ( 780) 00:15:45.435 3.745 - 3.769: 64.2762% ( 585) 00:15:45.435 3.769 - 3.793: 67.9881% ( 500) 00:15:45.435 3.793 - 3.816: 70.5865% ( 350) 00:15:45.435 3.816 - 3.840: 73.7416% ( 425) 00:15:45.435 3.840 - 3.864: 76.9710% ( 435) 00:15:45.435 3.864 - 3.887: 80.8983% ( 529) 00:15:45.435 3.887 - 3.911: 84.1425% ( 437) 00:15:45.435 3.911 - 3.935: 86.4736% ( 314) 00:15:45.435 3.935 - 3.959: 88.1440% ( 225) 00:15:45.435 3.959 - 3.982: 89.5546% ( 190) 00:15:45.435 3.982 - 4.006: 91.0245% ( 198) 00:15:45.435 4.006 - 4.030: 92.2420% ( 164) 00:15:45.435 4.030 - 4.053: 93.2220% ( 132) 00:15:45.435 4.053 - 4.077: 93.9050% ( 92) 00:15:45.435 4.077 - 4.101: 94.5805% ( 91) 00:15:45.435 4.101 - 4.124: 95.0705% ( 66) 00:15:45.435 4.124 - 4.148: 95.4788% ( 55) 00:15:45.435 4.148 - 4.172: 95.7313% ( 34) 00:15:45.435 4.172 - 4.196: 95.8575% ( 17) 00:15:45.435 4.196 - 4.219: 95.9688% ( 15) 00:15:45.435 4.219 - 4.243: 96.0579% ( 12) 00:15:45.435 4.243 - 4.267: 96.1470% ( 12) 00:15:45.435 4.267 - 4.290: 96.2658% ( 16) 00:15:45.435 4.290 - 4.314: 96.3623% ( 13) 00:15:45.435 4.314 - 4.338: 96.3920% ( 4) 00:15:45.435 4.338 - 4.361: 96.4291% ( 5) 00:15:45.435 4.361 - 4.385: 96.4662% ( 5) 00:15:45.435 4.385 - 4.409: 96.5108% ( 6) 00:15:45.435 4.409 - 4.433: 96.5405% ( 4) 00:15:45.435 4.433 - 4.456: 96.5850% ( 6) 00:15:45.435 4.456 - 4.480: 96.6073% ( 3) 00:15:45.435 4.480 - 4.504: 96.6221% ( 2) 00:15:45.435 4.504 - 4.527: 96.6295% ( 1) 00:15:45.435 4.527 - 4.551: 96.6518% ( 3) 00:15:45.435 4.551 - 4.575: 96.6667% ( 2) 00:15:45.435 4.599 - 4.622: 96.6964% ( 4) 00:15:45.435 4.622 - 4.646: 96.7335% ( 5) 00:15:45.435 4.646 - 4.670: 96.7558% ( 3) 00:15:45.435 4.670 - 4.693: 96.7780% ( 3) 00:15:45.435 4.693 - 4.717: 96.8151% ( 5) 00:15:45.435 4.717 - 4.741: 96.8671% ( 7) 00:15:45.435 4.741 - 4.764: 96.9191% ( 7) 00:15:45.435 4.764 - 4.788: 96.9488% ( 4) 00:15:45.435 4.788 - 4.812: 97.0156% ( 9) 00:15:45.435 4.812 - 4.836: 97.0601% ( 6) 00:15:45.435 4.836 - 4.859: 97.0973% ( 5) 00:15:45.435 4.859 - 4.883: 97.1269% ( 4) 00:15:45.435 4.883 - 4.907: 97.1789% ( 7) 00:15:45.435 4.907 - 4.930: 97.2086% ( 4) 00:15:45.435 4.930 - 4.954: 97.2457% ( 5) 00:15:45.435 4.954 - 4.978: 97.2829% ( 5) 00:15:45.435 4.978 - 5.001: 97.3125% ( 4) 00:15:45.435 5.001 - 5.025: 97.3497% ( 5) 00:15:45.435 5.025 - 5.049: 97.3719% ( 3) 00:15:45.435 5.049 - 5.073: 97.4239% ( 7) 00:15:45.435 5.073 - 5.096: 97.4462% ( 3) 00:15:45.435 5.096 - 5.120: 97.4610% ( 2) 00:15:45.435 5.120 - 5.144: 97.4833% ( 3) 00:15:45.435 5.144 - 5.167: 97.5278% ( 6) 00:15:45.435 5.167 - 5.191: 97.5501% ( 3) 00:15:45.435 5.215 - 5.239: 97.5575% ( 1) 00:15:45.435 5.239 - 5.262: 97.5798% ( 3) 00:15:45.435 5.262 - 5.286: 97.6318% ( 7) 00:15:45.435 5.286 - 5.310: 97.6466% ( 2) 00:15:45.435 5.310 - 5.333: 97.6689% ( 3) 00:15:45.435 5.333 - 5.357: 97.6837% ( 2) 
00:15:45.435 5.357 - 5.381: 97.7060% ( 3) 00:15:45.435 5.381 - 5.404: 97.7283% ( 3) 00:15:45.435 5.404 - 5.428: 97.7431% ( 2) 00:15:45.435 5.428 - 5.452: 97.7580% ( 2) 00:15:45.435 5.452 - 5.476: 97.7728% ( 2) 00:15:45.435 5.499 - 5.523: 97.7877% ( 2) 00:15:45.435 5.523 - 5.547: 97.8322% ( 6) 00:15:45.435 5.547 - 5.570: 97.8768% ( 6) 00:15:45.435 5.570 - 5.594: 97.8842% ( 1) 00:15:45.435 5.594 - 5.618: 97.9213% ( 5) 00:15:45.435 5.618 - 5.641: 97.9287% ( 1) 00:15:45.435 5.641 - 5.665: 97.9584% ( 4) 00:15:45.435 5.665 - 5.689: 97.9659% ( 1) 00:15:45.435 5.689 - 5.713: 97.9807% ( 2) 00:15:45.435 5.713 - 5.736: 97.9881% ( 1) 00:15:45.435 5.736 - 5.760: 98.0030% ( 2) 00:15:45.435 5.760 - 5.784: 98.0327% ( 4) 00:15:45.435 5.784 - 5.807: 98.0475% ( 2) 00:15:45.435 5.807 - 5.831: 98.0549% ( 1) 00:15:45.435 5.831 - 5.855: 98.0698% ( 2) 00:15:45.435 5.855 - 5.879: 98.0772% ( 1) 00:15:45.435 5.902 - 5.926: 98.0846% ( 1) 00:15:45.435 5.973 - 5.997: 98.0995% ( 2) 00:15:45.435 5.997 - 6.021: 98.1292% ( 4) 00:15:45.435 6.021 - 6.044: 98.1440% ( 2) 00:15:45.435 6.044 - 6.068: 98.1514% ( 1) 00:15:45.435 6.068 - 6.116: 98.1663% ( 2) 00:15:45.435 6.116 - 6.163: 98.1886% ( 3) 00:15:45.435 6.210 - 6.258: 98.1960% ( 1) 00:15:45.435 6.258 - 6.305: 98.2034% ( 1) 00:15:45.435 6.353 - 6.400: 98.2108% ( 1) 00:15:45.435 6.542 - 6.590: 98.2183% ( 1) 00:15:45.435 6.637 - 6.684: 98.2257% ( 1) 00:15:45.435 6.684 - 6.732: 98.2331% ( 1) 00:15:45.435 6.732 - 6.779: 98.2554% ( 3) 00:15:45.435 6.779 - 6.827: 98.2777% ( 3) 00:15:45.435 6.827 - 6.874: 98.2851% ( 1) 00:15:45.435 6.874 - 6.921: 98.2925% ( 1) 00:15:45.435 6.969 - 7.016: 98.2999% ( 1) 00:15:45.435 7.016 - 7.064: 98.3148% ( 2) 00:15:45.435 7.064 - 7.111: 98.3296% ( 2) 00:15:45.435 7.253 - 7.301: 98.3370% ( 1) 00:15:45.435 7.301 - 7.348: 98.3445% ( 1) 00:15:45.435 7.396 - 7.443: 98.3519% ( 1) 00:15:45.435 7.443 - 7.490: 98.3742% ( 3) 00:15:45.435 7.490 - 7.538: 98.3816% ( 1) 00:15:45.435 7.538 - 7.585: 98.4039% ( 3) 00:15:45.435 7.585 - 7.633: 98.4113% ( 1) 00:15:45.435 7.633 - 7.680: 98.4187% ( 1) 00:15:45.435 7.680 - 7.727: 98.4410% ( 3) 00:15:45.435 7.727 - 7.775: 98.4558% ( 2) 00:15:45.435 7.822 - 7.870: 98.4781% ( 3) 00:15:45.436 7.870 - 7.917: 98.4929% ( 2) 00:15:45.436 7.917 - 7.964: 98.5004% ( 1) 00:15:45.436 7.964 - 8.012: 98.5226% ( 3) 00:15:45.436 8.059 - 8.107: 98.5449% ( 3) 00:15:45.436 8.201 - 8.249: 98.5523% ( 1) 00:15:45.436 8.249 - 8.296: 98.5672% ( 2) 00:15:45.436 8.296 - 8.344: 98.5746% ( 1) 00:15:45.436 8.391 - 8.439: 98.5820% ( 1) 00:15:45.436 8.439 - 8.486: 98.5895% ( 1) 00:15:45.436 8.533 - 8.581: 98.5969% ( 1) 00:15:45.436 8.581 - 8.628: 98.6043% ( 1) 00:15:45.436 8.770 - 8.818: 98.6117% ( 1) 00:15:45.436 8.818 - 8.865: 98.6192% ( 1) 00:15:45.436 8.865 - 8.913: 98.6340% ( 2) 00:15:45.436 9.007 - 9.055: 98.6488% ( 2) 00:15:45.436 9.102 - 9.150: 98.6563% ( 1) 00:15:45.436 9.150 - 9.197: 98.6637% ( 1) 00:15:45.436 9.292 - 9.339: 98.6711% ( 1) 00:15:45.436 9.624 - 9.671: 98.6860% ( 2) 00:15:45.436 9.671 - 9.719: 98.6934% ( 1) 00:15:45.436 9.766 - 9.813: 98.7008% ( 1) 00:15:45.436 9.861 - 9.908: 98.7082% ( 1) 00:15:45.436 10.050 - 10.098: 98.7157% ( 1) 00:15:45.436 10.098 - 10.145: 98.7231% ( 1) 00:15:45.436 10.287 - 10.335: 98.7305% ( 1) 00:15:45.436 10.335 - 10.382: 98.7454% ( 2) 00:15:45.436 10.430 - 10.477: 98.7528% ( 1) 00:15:45.436 10.572 - 10.619: 98.7602% ( 1) 00:15:45.436 10.667 - 10.714: 98.7676% ( 1) 00:15:45.436 10.761 - 10.809: 98.7751% ( 1) 00:15:45.436 10.809 - 10.856: 98.7825% ( 1) 00:15:45.436 10.856 - 10.904: 98.7899% ( 1) 
00:15:45.436 10.951 - 10.999: 98.7973% ( 1) 00:15:45.436 11.046 - 11.093: 98.8048% ( 1) 00:15:45.436 11.378 - 11.425: 98.8122% ( 1) 00:15:45.436 11.473 - 11.520: 98.8196% ( 1) 00:15:45.436 11.662 - 11.710: 98.8344% ( 2) 00:15:45.436 11.804 - 11.852: 98.8493% ( 2) 00:15:45.436 11.852 - 11.899: 98.8567% ( 1) 00:15:45.436 11.994 - 12.041: 98.8716% ( 2) 00:15:45.436 12.326 - 12.421: 98.8864% ( 2) 00:15:45.436 12.421 - 12.516: 98.8938% ( 1) 00:15:45.436 12.800 - 12.895: 98.9087% ( 2) 00:15:45.436 12.895 - 12.990: 98.9161% ( 1) 00:15:45.436 12.990 - 13.084: 98.9235% ( 1) 00:15:45.436 13.084 - 13.179: 98.9310% ( 1) 00:15:45.436 13.369 - 13.464: 98.9384% ( 1) 00:15:45.436 13.464 - 13.559: 98.9458% ( 1) 00:15:45.436 13.938 - 14.033: 98.9532% ( 1) 00:15:45.436 14.222 - 14.317: 98.9607% ( 1) 00:15:45.436 14.317 - 14.412: 98.9681% ( 1) 00:15:45.436 14.412 - 14.507: 98.9755% ( 1) 00:15:45.436 14.507 - 14.601: 98.9829% ( 1) 00:15:45.436 14.601 - 14.696: 98.9978% ( 2) 00:15:45.436 14.696 - 14.791: 99.0052% ( 1) 00:15:45.436 14.981 - 15.076: 99.0126% ( 1) 00:15:45.436 15.170 - 15.265: 99.0200% ( 1) 00:15:45.436 15.550 - 15.644: 99.0275% ( 1) 00:15:45.436 16.972 - 17.067: 99.0349% ( 1) 00:15:45.436 17.161 - 17.256: 99.0497% ( 2) 00:15:45.436 17.256 - 17.351: 99.0720% ( 3) 00:15:45.436 17.351 - 17.446: 99.1091% ( 5) 00:15:45.436 17.446 - 17.541: 99.1388% ( 4) 00:15:45.436 17.541 - 17.636: 99.1834% ( 6) 00:15:45.436 17.636 - 17.730: 99.2502% ( 9) 00:15:45.436 17.730 - 17.825: 99.2947% ( 6) 00:15:45.436 17.825 - 17.920: 99.3615% ( 9) 00:15:45.436 17.920 - 18.015: 99.4061% ( 6) 00:15:45.436 18.015 - 18.110: 99.4358% ( 4) 00:15:45.436 18.110 - 18.204: 99.5100% ( 10) 00:15:45.436 18.204 - 18.299: 99.5620% ( 7) 00:15:45.436 18.299 - 18.394: 99.5917% ( 4) 00:15:45.436 18.394 - 18.489: 99.6733% ( 11) 00:15:45.436 18.489 - 18.584: 99.7402% ( 9) 00:15:45.436 18.584 - 18.679: 99.7699% ( 4) 00:15:45.436 18.679 - 18.773: 99.7996% ( 4) 00:15:45.436 18.773 - 18.868: 99.8070% ( 1) 00:15:45.436 18.868 - 18.963: 99.8218% ( 2) 00:15:45.436 18.963 - 19.058: 99.8293% ( 1) 00:15:45.436 19.153 - 19.247: 99.8367% ( 1) 00:15:45.436 19.342 - 19.437: 99.8441% ( 1) 00:15:45.436 19.437 - 19.532: 99.8515% ( 1) 00:15:45.436 19.627 - 19.721: 99.8589% ( 1) 00:15:45.436 19.721 - 19.816: 99.8738% ( 2) 00:15:45.436 21.049 - 21.144: 99.8812% ( 1) 00:15:45.436 21.144 - 21.239: 99.8886% ( 1) 00:15:45.436 25.410 - 25.600: 99.8961% ( 1) 00:15:45.436 28.824 - 29.013: 99.9035% ( 1) 00:15:45.436 3980.705 - 4004.978: 99.9852% ( 11) 00:15:45.436 4004.978 - 4029.250: 100.0000% ( 2) 00:15:45.436 00:15:45.436 Complete histogram 00:15:45.436 ================== 00:15:45.436 Range in us Cumulative Count 00:15:45.436 2.050 - 2.062: 0.3341% ( 45) 00:15:45.436 2.062 - 2.074: 26.6592% ( 3546) 00:15:45.436 2.074 - 2.086: 38.2183% ( 1557) 00:15:45.436 2.086 - 2.098: 40.9577% ( 369) 00:15:45.436 2.098 - 2.110: 57.0304% ( 2165) 00:15:45.436 2.110 - 2.121: 61.0987% ( 548) 00:15:45.436 2.121 - 2.133: 64.1054% ( 405) 00:15:45.436 2.133 - 2.145: 74.9740% ( 1464) 00:15:45.436 2.145 - 2.157: 77.5798% ( 351) 00:15:45.436 2.157 - 2.169: 79.9777% ( 323) 00:15:45.436 2.169 - 2.181: 85.1819% ( 701) 00:15:45.436 2.181 - 2.193: 86.7335% ( 209) 00:15:45.436 2.193 - 2.204: 87.6540% ( 124) 00:15:45.436 2.204 - 2.216: 89.8070% ( 290) 00:15:45.436 2.216 - 2.228: 91.3660% ( 210) 00:15:45.436 2.228 - 2.240: 92.9547% ( 214) 00:15:45.436 2.240 - 2.252: 94.1128% ( 156) 00:15:45.436 2.252 - 2.264: 94.4395% ( 44) 00:15:45.436 2.264 - 2.276: 94.6474% ( 28) 00:15:45.436 2.276 - 2.287: 
[remainder of the aer latency histogram elided: cumulative per-bucket percentiles climb from 94.7884% ( 19) through 95.0186% ( 31) at 2.287 - 2.299, reaching 99.9332% ( 71) at 3980.705 - 4004.978, 99.9926% ( 1) at 4029.250 - 4053.523, and 100.0000% ( 1) at 7184.687 - 7233.233]
[2024-07-13 20:03:32.708621] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:15:45.436 00:15:45.436 20:03:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:45.436 20:03:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:45.436 20:03:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:45.436 20:03:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:45.436 20:03:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:45.436 [ 00:15:45.436 { 00:15:45.436 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:45.436 "subtype": "Discovery", 00:15:45.436 "listen_addresses": [], 00:15:45.436 "allow_any_host": true, 00:15:45.436 "hosts": [] 00:15:45.436 }, 00:15:45.436 { 00:15:45.436 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:45.436 "subtype": "NVMe", 00:15:45.436 "listen_addresses": [ 00:15:45.436 { 00:15:45.436 "trtype": "VFIOUSER", 00:15:45.436 "adrfam": "IPv4", 00:15:45.436 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:45.437 "trsvcid": "0" 00:15:45.437 } 00:15:45.437 ], 00:15:45.437 "allow_any_host": true, 00:15:45.437 "hosts": [], 00:15:45.437 "serial_number": "SPDK1", 00:15:45.437 "model_number": "SPDK bdev Controller",
00:15:45.437 "max_namespaces": 32, 00:15:45.437 "min_cntlid": 1, 00:15:45.437 "max_cntlid": 65519, 00:15:45.437 "namespaces": [ 00:15:45.437 { 00:15:45.437 "nsid": 1, 00:15:45.437 "bdev_name": "Malloc1", 00:15:45.437 "name": "Malloc1", 00:15:45.437 "nguid": "B18EB594138744E4A340A67E30779285", 00:15:45.437 "uuid": "b18eb594-1387-44e4-a340-a67e30779285" 00:15:45.437 }, 00:15:45.437 { 00:15:45.437 "nsid": 2, 00:15:45.437 "bdev_name": "Malloc3", 00:15:45.437 "name": "Malloc3", 00:15:45.437 "nguid": "213B7462827349108A454CEEA02AF82B", 00:15:45.437 "uuid": "213b7462-8273-4910-8a45-4ceea02af82b" 00:15:45.437 } 00:15:45.437 ] 00:15:45.437 }, 00:15:45.437 { 00:15:45.437 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:45.437 "subtype": "NVMe", 00:15:45.437 "listen_addresses": [ 00:15:45.437 { 00:15:45.437 "trtype": "VFIOUSER", 00:15:45.437 "adrfam": "IPv4", 00:15:45.437 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:45.437 "trsvcid": "0" 00:15:45.437 } 00:15:45.437 ], 00:15:45.437 "allow_any_host": true, 00:15:45.437 "hosts": [], 00:15:45.437 "serial_number": "SPDK2", 00:15:45.437 "model_number": "SPDK bdev Controller", 00:15:45.437 "max_namespaces": 32, 00:15:45.437 "min_cntlid": 1, 00:15:45.437 "max_cntlid": 65519, 00:15:45.437 "namespaces": [ 00:15:45.437 { 00:15:45.437 "nsid": 1, 00:15:45.437 "bdev_name": "Malloc2", 00:15:45.437 "name": "Malloc2", 00:15:45.437 "nguid": "7116ADD65C7B4D869DB24A1665F1D588", 00:15:45.437 "uuid": "7116add6-5c7b-4d86-9db2-4a1665f1d588" 00:15:45.437 } 00:15:45.437 ] 00:15:45.437 } 00:15:45.437 ] 00:15:45.437 20:03:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:45.437 20:03:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3162067 00:15:45.437 20:03:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:45.437 20:03:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:45.437 20:03:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:15:45.437 20:03:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:45.437 20:03:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:45.437 20:03:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:15:45.437 20:03:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:45.437 20:03:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:45.694 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.694 [2024-07-13 20:03:33.204318] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:45.694 Malloc4 00:15:45.694 20:03:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:45.952 [2024-07-13 20:03:33.542748] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:45.952 20:03:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:45.952 Asynchronous Event Request test 00:15:45.952 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.952 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:45.952 Registering asynchronous event callbacks... 00:15:45.952 Starting namespace attribute notice tests for all controllers... 00:15:45.952 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:45.952 aer_cb - Changed Namespace 00:15:45.952 Cleaning up... 00:15:46.209 [ 00:15:46.209 { 00:15:46.209 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:46.209 "subtype": "Discovery", 00:15:46.209 "listen_addresses": [], 00:15:46.209 "allow_any_host": true, 00:15:46.209 "hosts": [] 00:15:46.209 }, 00:15:46.209 { 00:15:46.209 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:46.209 "subtype": "NVMe", 00:15:46.209 "listen_addresses": [ 00:15:46.209 { 00:15:46.209 "trtype": "VFIOUSER", 00:15:46.209 "adrfam": "IPv4", 00:15:46.209 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:46.209 "trsvcid": "0" 00:15:46.209 } 00:15:46.209 ], 00:15:46.209 "allow_any_host": true, 00:15:46.209 "hosts": [], 00:15:46.209 "serial_number": "SPDK1", 00:15:46.209 "model_number": "SPDK bdev Controller", 00:15:46.209 "max_namespaces": 32, 00:15:46.209 "min_cntlid": 1, 00:15:46.209 "max_cntlid": 65519, 00:15:46.209 "namespaces": [ 00:15:46.209 { 00:15:46.209 "nsid": 1, 00:15:46.209 "bdev_name": "Malloc1", 00:15:46.209 "name": "Malloc1", 00:15:46.209 "nguid": "B18EB594138744E4A340A67E30779285", 00:15:46.209 "uuid": "b18eb594-1387-44e4-a340-a67e30779285" 00:15:46.209 }, 00:15:46.209 { 00:15:46.209 "nsid": 2, 00:15:46.209 "bdev_name": "Malloc3", 00:15:46.209 "name": "Malloc3", 00:15:46.209 "nguid": "213B7462827349108A454CEEA02AF82B", 00:15:46.209 "uuid": "213b7462-8273-4910-8a45-4ceea02af82b" 00:15:46.209 } 00:15:46.209 ] 00:15:46.209 }, 00:15:46.209 { 00:15:46.209 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:46.209 "subtype": "NVMe", 00:15:46.209 "listen_addresses": [ 00:15:46.209 { 00:15:46.209 "trtype": "VFIOUSER", 00:15:46.209 "adrfam": "IPv4", 00:15:46.209 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:46.209 "trsvcid": "0" 00:15:46.209 } 00:15:46.209 ], 00:15:46.209 "allow_any_host": true, 00:15:46.210 "hosts": [], 00:15:46.210 "serial_number": "SPDK2", 00:15:46.210 "model_number": "SPDK bdev Controller", 00:15:46.210 
"max_namespaces": 32, 00:15:46.210 "min_cntlid": 1, 00:15:46.210 "max_cntlid": 65519, 00:15:46.210 "namespaces": [ 00:15:46.210 { 00:15:46.210 "nsid": 1, 00:15:46.210 "bdev_name": "Malloc2", 00:15:46.210 "name": "Malloc2", 00:15:46.210 "nguid": "7116ADD65C7B4D869DB24A1665F1D588", 00:15:46.210 "uuid": "7116add6-5c7b-4d86-9db2-4a1665f1d588" 00:15:46.210 }, 00:15:46.210 { 00:15:46.210 "nsid": 2, 00:15:46.210 "bdev_name": "Malloc4", 00:15:46.210 "name": "Malloc4", 00:15:46.210 "nguid": "D9CE38642E854467947E21677E4BE190", 00:15:46.210 "uuid": "d9ce3864-2e85-4467-947e-21677e4be190" 00:15:46.210 } 00:15:46.210 ] 00:15:46.210 } 00:15:46.210 ] 00:15:46.210 20:03:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3162067 00:15:46.210 20:03:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:46.210 20:03:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3155843 00:15:46.210 20:03:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3155843 ']' 00:15:46.210 20:03:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3155843 00:15:46.210 20:03:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:46.210 20:03:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:46.210 20:03:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3155843 00:15:46.468 20:03:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:46.468 20:03:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:46.468 20:03:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3155843' 00:15:46.468 killing process with pid 3155843 00:15:46.468 20:03:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3155843 00:15:46.468 20:03:33 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3155843 00:15:46.726 20:03:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:46.726 20:03:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:46.726 20:03:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:46.726 20:03:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:46.726 20:03:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:46.726 20:03:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3162210 00:15:46.726 20:03:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3162210' 00:15:46.726 Process pid: 3162210 00:15:46.726 20:03:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:46.726 20:03:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:46.726 20:03:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3162210 00:15:46.726 20:03:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 3162210 ']' 00:15:46.726 20:03:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.726 20:03:34 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:46.726 20:03:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.726 20:03:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:46.726 20:03:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:46.726 [2024-07-13 20:03:34.222667] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:46.726 [2024-07-13 20:03:34.223772] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:46.727 [2024-07-13 20:03:34.223840] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.727 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.727 [2024-07-13 20:03:34.288016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.727 [2024-07-13 20:03:34.384380] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.727 [2024-07-13 20:03:34.384441] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.999 [2024-07-13 20:03:34.384458] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.999 [2024-07-13 20:03:34.384472] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.999 [2024-07-13 20:03:34.384484] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.999 [2024-07-13 20:03:34.387889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.999 [2024-07-13 20:03:34.387940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.999 [2024-07-13 20:03:34.391948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.999 [2024-07-13 20:03:34.391953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.999 [2024-07-13 20:03:34.493624] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:46.999 [2024-07-13 20:03:34.493881] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:46.999 [2024-07-13 20:03:34.494159] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:46.999 [2024-07-13 20:03:34.494810] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:46.999 [2024-07-13 20:03:34.495077] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
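Note: the interrupt-mode pass re-creates the same two-device vfio-user topology as the earlier passes. A minimal sketch of the RPC sequence it drives, using only commands and paths that appear in this log (rpc.py shortened from its full workspace path):

    # create the VFIOUSER transport with this pass's extra '-M -I' flags
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    # per device: domain directory, malloc bdev (64 MB, 512-byte blocks), subsystem, namespace, listener
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same steps repeat below for vfio-user2/2 with Malloc2 and cnode2.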
00:15:46.999 20:03:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:46.999 20:03:34 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:15:46.999 20:03:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:47.970 20:03:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:48.229 20:03:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:48.229 20:03:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:48.229 20:03:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:48.229 20:03:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:48.229 20:03:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:48.487 Malloc1 00:15:48.487 20:03:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:48.746 20:03:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:49.003 20:03:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:49.260 20:03:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:49.260 20:03:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:49.260 20:03:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:49.518 Malloc2 00:15:49.518 20:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:49.775 20:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:50.032 20:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:50.290 20:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:50.290 20:03:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3162210 00:15:50.290 20:03:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 3162210 ']' 00:15:50.290 20:03:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 3162210 00:15:50.290 20:03:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:15:50.290 20:03:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:50.290 20:03:37 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3162210 00:15:50.290 20:03:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:50.290 20:03:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:50.290 20:03:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3162210' 00:15:50.290 killing process with pid 3162210 00:15:50.290 20:03:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 3162210 00:15:50.290 20:03:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 3162210 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:50.549 00:15:50.549 real 0m52.432s 00:15:50.549 user 3m27.104s 00:15:50.549 sys 0m4.351s 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:50.549 ************************************ 00:15:50.549 END TEST nvmf_vfio_user 00:15:50.549 ************************************ 00:15:50.549 20:03:38 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:50.549 20:03:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:50.549 20:03:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:50.549 20:03:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:50.549 ************************************ 00:15:50.549 START TEST nvmf_vfio_user_nvme_compliance 00:15:50.549 ************************************ 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:50.549 * Looking for test storage... 
00:15:50.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance
00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
[xtrace of test/nvmf/common.sh elided: the usual NVMF_*/NVME_* environment defaults, scripts/common.sh checks, the repeated paths/export.sh PATH prepends, and NVMF_APP argument assembly, identical for every test in this run]
00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance --
compliance/compliance.sh@20 -- # nvmfpid=3162693 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3162693' 00:15:50.549 Process pid: 3162693 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3162693 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 3162693 ']' 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:50.549 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:50.808 [2024-07-13 20:03:38.246434] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:15:50.808 [2024-07-13 20:03:38.246514] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.808 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.808 [2024-07-13 20:03:38.304450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:50.808 [2024-07-13 20:03:38.394258] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.808 [2024-07-13 20:03:38.394323] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.808 [2024-07-13 20:03:38.394338] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.808 [2024-07-13 20:03:38.394363] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.808 [2024-07-13 20:03:38.394373] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
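Note: waitforlisten gates the test on the target's RPC socket coming up. A rough sketch of its retry loop, assuming the default socket path seen above (the real helper in autotest_common.sh also checks that the pid is still alive):

    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        # rpc_get_methods succeeds once the target's RPC server accepts connections
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods > /dev/null 2>&1 && break
        sleep 0.1
    done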
00:15:50.808 [2024-07-13 20:03:38.394520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.808 [2024-07-13 20:03:38.394586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.808 [2024-07-13 20:03:38.394589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.066 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:51.066 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:15:51.066 20:03:38 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:51.999 malloc0 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:51.999 20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.999 
20:03:39 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:51.999 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.256 00:15:52.256 00:15:52.256 CUnit - A unit testing framework for C - Version 2.1-3 00:15:52.256 http://cunit.sourceforge.net/ 00:15:52.256 00:15:52.256 00:15:52.256 Suite: nvme_compliance 00:15:52.257 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-13 20:03:39.741435] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.257 [2024-07-13 20:03:39.742886] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:52.257 [2024-07-13 20:03:39.742913] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:52.257 [2024-07-13 20:03:39.742927] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:52.257 [2024-07-13 20:03:39.744459] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.257 passed 00:15:52.257 Test: admin_identify_ctrlr_verify_fused ...[2024-07-13 20:03:39.833084] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.257 [2024-07-13 20:03:39.836106] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.257 passed 00:15:52.514 Test: admin_identify_ns ...[2024-07-13 20:03:39.923672] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.514 [2024-07-13 20:03:39.982886] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:52.514 [2024-07-13 20:03:39.990882] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:52.514 [2024-07-13 20:03:40.012016] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.514 passed 00:15:52.514 Test: admin_get_features_mandatory_features ...[2024-07-13 20:03:40.097132] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.514 [2024-07-13 20:03:40.100172] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.514 passed 00:15:52.771 Test: admin_get_features_optional_features ...[2024-07-13 20:03:40.187752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.771 [2024-07-13 20:03:40.190775] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.771 passed 00:15:52.771 Test: admin_set_features_number_of_queues ...[2024-07-13 20:03:40.276377] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:52.771 [2024-07-13 20:03:40.381122] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:52.771 passed 00:15:53.027 Test: admin_get_log_page_mandatory_logs ...[2024-07-13 20:03:40.465070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.027 [2024-07-13 20:03:40.468100] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.027 passed 00:15:53.027 Test: admin_get_log_page_with_lpo ...[2024-07-13 20:03:40.554392] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.027 [2024-07-13 20:03:40.619897] 
ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:53.027 [2024-07-13 20:03:40.632962] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.027 passed 00:15:53.285 Test: fabric_property_get ...[2024-07-13 20:03:40.719453] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.285 [2024-07-13 20:03:40.720729] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:53.285 [2024-07-13 20:03:40.722481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.285 passed 00:15:53.285 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-13 20:03:40.807056] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.285 [2024-07-13 20:03:40.808347] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:53.285 [2024-07-13 20:03:40.810084] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.285 passed 00:15:53.285 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-13 20:03:40.897250] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.542 [2024-07-13 20:03:40.979873] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:53.542 [2024-07-13 20:03:40.995891] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:53.542 [2024-07-13 20:03:41.001000] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.542 passed 00:15:53.542 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-13 20:03:41.081662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.542 [2024-07-13 20:03:41.082969] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:53.542 [2024-07-13 20:03:41.086695] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.542 passed 00:15:53.542 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-13 20:03:41.169929] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.801 [2024-07-13 20:03:41.247878] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:53.801 [2024-07-13 20:03:41.271891] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:53.801 [2024-07-13 20:03:41.276974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.801 passed 00:15:53.801 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-13 20:03:41.360593] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:53.801 [2024-07-13 20:03:41.361896] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:53.801 [2024-07-13 20:03:41.361951] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:53.801 [2024-07-13 20:03:41.363619] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:53.801 passed 00:15:53.801 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-13 20:03:41.447461] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.059 [2024-07-13 20:03:41.538879] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:15:54.059 [2024-07-13 20:03:41.546877] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:54.059 [2024-07-13 20:03:41.554873] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:54.059 [2024-07-13 20:03:41.562876] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:54.059 [2024-07-13 20:03:41.591989] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.059 passed 00:15:54.059 Test: admin_create_io_sq_verify_pc ...[2024-07-13 20:03:41.675717] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:54.059 [2024-07-13 20:03:41.692891] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:54.059 [2024-07-13 20:03:41.710179] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:54.317 passed 00:15:54.317 Test: admin_create_io_qp_max_qps ...[2024-07-13 20:03:41.795752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:55.251 [2024-07-13 20:03:42.889897] nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:55.816 [2024-07-13 20:03:43.285894] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:55.816 passed 00:15:55.816 Test: admin_create_io_sq_shared_cq ...[2024-07-13 20:03:43.369488] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:56.075 [2024-07-13 20:03:43.500873] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:56.075 [2024-07-13 20:03:43.537983] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:56.075 passed 00:15:56.075 00:15:56.075 Run Summary: Type Total Ran Passed Failed Inactive 00:15:56.075 suites 1 1 n/a 0 0 00:15:56.075 tests 18 18 18 0 0 00:15:56.075 asserts 360 360 360 0 n/a 00:15:56.075 00:15:56.075 Elapsed time = 1.575 seconds 00:15:56.075 20:03:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3162693 00:15:56.075 20:03:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 3162693 ']' 00:15:56.075 20:03:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 3162693 00:15:56.075 20:03:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:15:56.075 20:03:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:56.075 20:03:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3162693 00:15:56.075 20:03:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:56.075 20:03:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:56.075 20:03:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3162693' 00:15:56.075 killing process with pid 3162693 00:15:56.075 20:03:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 3162693 00:15:56.075 20:03:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 3162693 00:15:56.333 20:03:43 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:56.333 20:03:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:56.333 00:15:56.333 real 0m5.741s 00:15:56.333 user 0m16.221s 00:15:56.333 sys 0m0.524s 00:15:56.333 20:03:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:56.333 20:03:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:56.333 ************************************ 00:15:56.333 END TEST nvmf_vfio_user_nvme_compliance 00:15:56.333 ************************************ 00:15:56.333 20:03:43 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:56.333 20:03:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:56.333 20:03:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:56.333 20:03:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:56.334 ************************************ 00:15:56.334 START TEST nvmf_vfio_user_fuzz 00:15:56.334 ************************************ 00:15:56.334 20:03:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:56.334 * Looking for test storage... 00:15:56.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:56.592 20:03:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
[xtrace of test/nvmf/common.sh elided: the same NVMF_*/NVME_* environment defaults, scripts/common.sh checks, and repeated paths/export.sh PATH prepends as above]
00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3163406 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3163406' 00:15:56.593 Process pid: 3163406 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3163406 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3163406 ']' 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
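Note: with the target listening, the steps below build a single malloc-backed subsystem (malloc0 under nqn.2021-09.io.spdk:cnode0) and aim SPDK's nvme_fuzz tool at the vfio-user endpoint. The invocation, condensed from the command recorded further down (a 30-second run on core mask 0x2 with the fixed seed 123456; the remaining flags are taken verbatim from this run):

    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a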
00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:56.593 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.851 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:56.851 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:15:56.851 20:03:44 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:57.784 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:57.784 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.784 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.784 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.784 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:57.784 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:57.784 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.784 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.784 malloc0 00:15:57.785 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.785 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:57.785 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.785 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.785 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.785 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:57.785 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.785 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.785 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.785 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:57.785 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.785 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.785 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.785 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:57.785 20:03:45 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:29.871 Fuzzing completed. 
Shutting down the fuzz application 00:16:29.871 00:16:29.871 Dumping successful admin opcodes: 00:16:29.871 8, 9, 10, 24, 00:16:29.871 Dumping successful io opcodes: 00:16:29.871 0, 00:16:29.871 NS: 0x200003a1ef00 I/O qp, Total commands completed: 576142, total successful commands: 2216, random_seed: 2425100736 00:16:29.871 NS: 0x200003a1ef00 admin qp, Total commands completed: 78059, total successful commands: 603, random_seed: 1650207680 00:16:29.871 20:04:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:29.871 20:04:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.871 20:04:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.871 20:04:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.871 20:04:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3163406 00:16:29.871 20:04:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3163406 ']' 00:16:29.871 20:04:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 3163406 00:16:29.871 20:04:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:16:29.871 20:04:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:29.871 20:04:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3163406 00:16:29.871 20:04:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:29.871 20:04:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:29.871 20:04:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3163406' 00:16:29.871 killing process with pid 3163406 00:16:29.871 20:04:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 3163406 00:16:29.871 20:04:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 3163406 00:16:29.871 20:04:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:29.871 20:04:16 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:29.871 00:16:29.871 real 0m32.205s 00:16:29.871 user 0m31.073s 00:16:29.871 sys 0m28.987s 00:16:29.871 20:04:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:29.871 20:04:16 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:29.871 ************************************ 00:16:29.871 END TEST nvmf_vfio_user_fuzz 00:16:29.871 ************************************ 00:16:29.871 20:04:16 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:29.871 20:04:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:29.871 20:04:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:29.871 20:04:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:29.871 ************************************ 00:16:29.871 START TEST nvmf_host_management 00:16:29.871 
************************************ 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:29.871 * Looking for test storage... 00:16:29.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
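[Editor's note] nvmftestinit continues below: gather_supported_nvmf_pci_devs walks the PCI bus for known Intel/Mellanox NICs, finds the two E810 ports (cvl_0_0 and cvl_0_1), and nvmf_tcp_init then isolates the target-side port in its own network namespace. Condensed from the ip(8)/iptables calls traced below, with the interface names this run detected:

  # Target port cvl_0_0 goes into namespace cvl_0_0_ns_spdk as 10.0.0.2;
  # initiator port cvl_0_1 stays in the root namespace as 10.0.0.1.
  TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  # Let NVMe/TCP (port 4420) in from the initiator side, then sanity-ping
  # both directions, as the trace below does.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1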
00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:29.871 20:04:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.809 20:04:18 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:30.809 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:30.809 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:30.809 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.809 20:04:18 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:30.810 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.810 20:04:18 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:30.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:16:30.810 00:16:30.810 --- 10.0.0.2 ping statistics --- 00:16:30.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.810 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:16:30.810 00:16:30.810 --- 10.0.0.1 ping statistics --- 00:16:30.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.810 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3168851 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3168851 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3168851 ']' 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:30.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:30.810 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.067 [2024-07-13 20:04:18.494842] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:31.067 [2024-07-13 20:04:18.494952] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.067 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.067 [2024-07-13 20:04:18.559467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.068 [2024-07-13 20:04:18.646984] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.068 [2024-07-13 20:04:18.647038] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.068 [2024-07-13 20:04:18.647053] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.068 [2024-07-13 20:04:18.647065] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.068 [2024-07-13 20:04:18.647076] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.068 [2024-07-13 20:04:18.647160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.068 [2024-07-13 20:04:18.647223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.068 [2024-07-13 20:04:18.647271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:31.068 [2024-07-13 20:04:18.647274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.326 [2024-07-13 20:04:18.793441] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.326 Malloc0 00:16:31.326 [2024-07-13 20:04:18.852498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3169017 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3169017 /var/tmp/bdevperf.sock 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 3169017 ']' 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:31.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
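[Editor's note] bdevperf is started here with its own RPC socket so the test can poll it independently of the target. The trace feeds the generated bdev_nvme_attach_controller config over /dev/fd/63; in this sketch a plain file stands in for that process substitution (/tmp/nvme0.json is illustrative, holding the config printed just below):

  # 64-deep queue of 64 KiB verify I/O for 10 s against the attached bdev.
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
      --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10 &
  perfpid=$!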
00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:31.326 { 00:16:31.326 "params": { 00:16:31.326 "name": "Nvme$subsystem", 00:16:31.326 "trtype": "$TEST_TRANSPORT", 00:16:31.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:31.326 "adrfam": "ipv4", 00:16:31.326 "trsvcid": "$NVMF_PORT", 00:16:31.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.326 "hdgst": ${hdgst:-false}, 00:16:31.326 "ddgst": ${ddgst:-false} 00:16:31.326 }, 00:16:31.326 "method": "bdev_nvme_attach_controller" 00:16:31.326 } 00:16:31.326 EOF 00:16:31.326 )") 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:31.326 20:04:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:31.326 "params": { 00:16:31.326 "name": "Nvme0", 00:16:31.326 "trtype": "tcp", 00:16:31.326 "traddr": "10.0.0.2", 00:16:31.326 "adrfam": "ipv4", 00:16:31.326 "trsvcid": "4420", 00:16:31.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:31.326 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:31.326 "hdgst": false, 00:16:31.326 "ddgst": false 00:16:31.326 }, 00:16:31.326 "method": "bdev_nvme_attach_controller" 00:16:31.326 }' 00:16:31.326 [2024-07-13 20:04:18.922439] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:31.326 [2024-07-13 20:04:18.922528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3169017 ] 00:16:31.326 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.326 [2024-07-13 20:04:18.983012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.584 [2024-07-13 20:04:19.073353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.842 Running I/O for 10 seconds... 
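[Editor's note] The waitforio helper traced next polls bdevperf's per-bdev statistics over the private socket until Nvme0n1 has completed at least 100 reads, retrying up to ten times; the first sample below lands at 65 ops and the second at 449, so the loop breaks on its second pass. A simplified sketch of that countdown loop, with the socket path and bdev name from this run:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock "$@"; }
  i=10
  while (( i != 0 )); do
      reads=$(rpc bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
      [ "$reads" -ge 100 ] && break     # enough I/O has flowed; proceed
      sleep 0.25
      (( i-- ))
  done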
00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=65 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 65 -ge 100 ']' 00:16:31.842 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=449 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 449 -ge 100 ']' 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:32.101 [2024-07-13 20:04:19.683346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb980 is same with the state(5) to be set 00:16:32.101 [2024-07-13 20:04:19.683419] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb980 is same with the state(5) to be set 00:16:32.101 [2024-07-13 20:04:19.683434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb980 is same with the state(5) to be set 00:16:32.101 [2024-07-13 20:04:19.683447] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb980 is same with the state(5) to be set 00:16:32.101 [2024-07-13 20:04:19.683460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb980 is same with the state(5) to be set 00:16:32.101 [2024-07-13 20:04:19.683472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb980 is same with the state(5) to be set 00:16:32.101 [2024-07-13 20:04:19.683484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb980 is same with the state(5) to be set 00:16:32.101 [2024-07-13 20:04:19.683502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb980 is same with the state(5) to be set 00:16:32.101 [2024-07-13 20:04:19.683515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb980 is same with the state(5) to be set 00:16:32.101 [2024-07-13 20:04:19.683536] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb980 is same with the state(5) to be set 00:16:32.101 [2024-07-13 20:04:19.683549] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb980 is same with the state(5) to be set 00:16:32.101 [2024-07-13 20:04:19.683562] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb980 is same with the state(5) to be set 00:16:32.101 [2024-07-13 20:04:19.683574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb980 is same with the state(5) to be set 00:16:32.101 [2024-07-13 20:04:19.683586] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfdb980 is same with the state(5) to be set 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.101 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:32.102 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.102 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:32.102 [2024-07-13 20:04:19.692568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:32.102 [2024-07-13 20:04:19.692612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.692630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.102 [2024-07-13 20:04:19.692644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.692658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.102 [2024-07-13 20:04:19.692673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.692687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:32.102 [2024-07-13 20:04:19.692701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.692715] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b67f00 is same with the state(5) to be set 00:16:32.102 [2024-07-13 20:04:19.693569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.693596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.693626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.693641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.693658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.693672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.693688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.693703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.693719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.693739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.693756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.693771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.693788] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.693804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.693821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.693836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.693874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.693892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.693920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.693935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.693951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.693966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.693982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.693997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.694014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.694029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.694045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.694062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.694079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.694095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.694111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.694128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.694144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.694161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.694204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.694220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.694237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.694252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.694269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.694285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.694303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.694318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.694334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.694350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.694367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.694384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.694400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.694416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.694433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.694449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.694466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:32.102 [2024-07-13 20:04:19.694481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:32.102 [2024-07-13 20:04:19.694498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:32.102 [2024-07-13 20:04:19.694514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 37 further identical WRITE / ABORTED - SQ DELETION pairs, cid:27 through cid:63, lba:68992 through lba:73600 (len:128 each), omitted ...]
00:16:32.103 20:04:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:32.103 20:04:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:16:32.103 [2024-07-13 20:04:19.695915] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b62330 was disconnected and freed. reset controller.
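The burst above is the expected signature of a queue-pair teardown with writes in flight: every WRITE still outstanding on sqid:1 is completed by the target as ABORTED - SQ DELETION (status 00/08), after which bdev_nvme frees the qpair and resets the controller. A quick sanity check on a saved copy of this console output is to tally the aborted completions and confirm the command identifiers form one contiguous run; a minimal bash sketch, where "console.log" is only a placeholder for wherever the output was captured (the job itself writes no such file):

#!/usr/bin/env bash
# Count the aborted completions and report the cid range of the aborted
# WRITEs from a saved copy of this console output.
LOG=${1:-console.log}
echo "aborted completions: $(grep -c 'ABORTED - SQ DELETION' "$LOG")"
# Every WRITE printed in the dump above was aborted; extract their cids
# and print the min-max range (cid:26-63 here).
grep -o 'WRITE sqid:1 cid:[0-9]*' "$LOG" | sed 's/.*cid://' | sort -n |
  awk 'NR==1 { min = $1 } { max = $1 } END { print "aborted cid range: " min "-" max }'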
00:16:32.103 [2024-07-13 20:04:19.697039] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:16:32.103 task offset: 65536 on job bdev=Nvme0n1 fails
00:16:32.103
00:16:32.103 Latency(us)
00:16:32.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:32.103 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:32.103 Job: Nvme0n1 ended in about 0.40 seconds with error
00:16:32.103 Verification LBA range: start 0x0 length 0x400
00:16:32.103 Nvme0n1 : 0.40 1283.24 80.20 160.40 0.00 43099.72 2767.08 40972.14
00:16:32.103 ===================================================================================================================
00:16:32.103 Total : 1283.24 80.20 160.40 0.00 43099.72 2767.08 40972.14
00:16:32.103 [2024-07-13 20:04:19.698905] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:32.103 [2024-07-13 20:04:19.698936] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b67f00 (9): Bad file descriptor
00:16:32.103 [2024-07-13 20:04:19.709657] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:33.476 20:04:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3169017
00:16:33.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3169017) - No such process
00:16:33.476 20:04:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:16:33.476 20:04:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:16:33.476 20:04:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:16:33.476 20:04:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:16:33.476 20:04:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:16:33.476 20:04:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:16:33.476 20:04:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:16:33.476 20:04:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:16:33.476 {
00:16:33.476 "params": {
00:16:33.476 "name": "Nvme$subsystem",
00:16:33.476 "trtype": "$TEST_TRANSPORT",
00:16:33.476 "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:33.476 "adrfam": "ipv4",
00:16:33.476 "trsvcid": "$NVMF_PORT",
00:16:33.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:33.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:33.476 "hdgst": ${hdgst:-false},
00:16:33.476 "ddgst": ${ddgst:-false}
00:16:33.476 },
00:16:33.476 "method": "bdev_nvme_attach_controller"
00:16:33.476 }
00:16:33.476 EOF
00:16:33.476 )")
00:16:33.476 20:04:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:16:33.476 20:04:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
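gen_nvmf_target_json builds one bdev_nvme_attach_controller stanza per subsystem argument from the heredoc above, then pipes the assembled document through jq; the printf trace just below shows the expanded result, which bdevperf reads as --json /dev/fd/62 via process substitution. A self-contained sketch of the same hand-off, with placeholder paths, and with the outer "subsystems" wrapper assumed from bdevperf's JSON config layout rather than quoted from this trace:

#!/usr/bin/env bash
# Hand a bdev_nvme_attach_controller config to bdevperf on an anonymous
# fd, mirroring the "--json /dev/fd/62" invocation traced above.
# BDEVPERF is a placeholder; the real run uses the Jenkins workspace path.
BDEVPERF=${BDEVPERF:-./build/examples/bdevperf}

gen_json() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# <(...) expands to /dev/fd/NN in bash: 64 outstanding 64 KiB verify
# I/Os for 1 second, the same workload as the traced run.
"$BDEVPERF" --json <(gen_json) -q 64 -o 65536 -w verify -t 1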
00:16:33.476 20:04:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:16:33.476 20:04:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:16:33.476 "params": {
00:16:33.476 "name": "Nvme0",
00:16:33.476 "trtype": "tcp",
00:16:33.476 "traddr": "10.0.0.2",
00:16:33.476 "adrfam": "ipv4",
00:16:33.476 "trsvcid": "4420",
00:16:33.476 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:16:33.476 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:33.476 "hdgst": false,
00:16:33.476 "ddgst": false
00:16:33.476 },
00:16:33.476 "method": "bdev_nvme_attach_controller"
00:16:33.476 }'
00:16:33.476 [2024-07-13 20:04:20.744671] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
[2024-07-13 20:04:20.744745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3169174 ]
00:16:33.476 EAL: No free 2048 kB hugepages reported on node 1
00:16:33.476 [2024-07-13 20:04:20.804846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:33.476 [2024-07-13 20:04:20.895434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:33.734 Running I/O for 1 seconds...
00:16:34.668
00:16:34.668 Latency(us)
00:16:34.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:34.668 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:34.668 Verification LBA range: start 0x0 length 0x400
00:16:34.668 Nvme0n1 : 1.05 1163.35 72.71 0.00 0.00 54125.69 12524.66 53205.52
00:16:34.668 ===================================================================================================================
00:16:34.668 Total : 1163.35 72.71 0.00 0.00 54125.69 12524.66 53205.52
00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:34.926 rmmod nvme_tcp
00:16:34.926 rmmod nvme_fabrics
00:16:34.926 rmmod nvme_keyring
00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 --
# '[' -n 3168851 ']' 00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3168851 00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 3168851 ']' 00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 3168851 00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:34.926 20:04:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3168851 00:16:35.184 20:04:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:35.184 20:04:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:35.184 20:04:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3168851' 00:16:35.184 killing process with pid 3168851 00:16:35.184 20:04:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 3168851 00:16:35.184 20:04:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 3168851 00:16:35.184 [2024-07-13 20:04:22.823064] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:35.441 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:35.441 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:35.442 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:35.442 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.442 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:35.442 20:04:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.442 20:04:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.442 20:04:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.343 20:04:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:37.343 20:04:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:37.343 00:16:37.343 real 0m8.707s 00:16:37.343 user 0m19.696s 00:16:37.343 sys 0m2.687s 00:16:37.343 20:04:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:37.343 20:04:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:37.343 ************************************ 00:16:37.343 END TEST nvmf_host_management 00:16:37.343 ************************************ 00:16:37.343 20:04:24 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:37.343 20:04:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:37.343 20:04:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:37.343 20:04:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:37.343 ************************************ 00:16:37.343 START TEST nvmf_lvol 00:16:37.343 ************************************ 00:16:37.343 20:04:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:37.343 * Looking for test storage... 00:16:37.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:37.602 20:04:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.505 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:39.506 20:04:26 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:39.506 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:39.506 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:39.506 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:39.506 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:39.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:16:39.506 00:16:39.506 --- 10.0.0.2 ping statistics --- 00:16:39.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.506 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:39.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:16:39.506 00:16:39.506 --- 10.0.0.1 ping statistics --- 00:16:39.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.506 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:39.506 20:04:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:39.506 20:04:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:39.506 20:04:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:39.506 20:04:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:39.506 20:04:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:39.506 20:04:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3171368 00:16:39.506 20:04:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3171368 00:16:39.506 20:04:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 3171368 ']' 00:16:39.506 20:04:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.506 20:04:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:39.506 20:04:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:39.506 20:04:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.506 20:04:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:39.506 20:04:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:39.506 [2024-07-13 20:04:27.055467] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:39.506 [2024-07-13 20:04:27.055549] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.506 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.506 [2024-07-13 20:04:27.125789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:39.764 [2024-07-13 20:04:27.216388] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.764 [2024-07-13 20:04:27.216451] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:39.764 [2024-07-13 20:04:27.216478] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.764 [2024-07-13 20:04:27.216491] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.764 [2024-07-13 20:04:27.216503] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:39.764 [2024-07-13 20:04:27.216595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.764 [2024-07-13 20:04:27.216673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.765 [2024-07-13 20:04:27.216675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.765 20:04:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:39.765 20:04:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:16:39.765 20:04:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:39.765 20:04:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:39.765 20:04:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:39.765 20:04:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.765 20:04:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:40.022 [2024-07-13 20:04:27.582146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.022 20:04:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:40.281 20:04:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:40.281 20:04:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:40.539 20:04:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:40.539 20:04:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:40.796 20:04:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:41.055 20:04:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=61831410-1da2-4536-841a-efa83191fe27 00:16:41.055 20:04:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 61831410-1da2-4536-841a-efa83191fe27 lvol 20 00:16:41.313 20:04:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f9fb26f7-60f9-4bf1-b079-5aa193bd4fe8 00:16:41.313 20:04:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:41.570 20:04:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f9fb26f7-60f9-4bf1-b079-5aa193bd4fe8 00:16:41.828 20:04:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:16:42.119 [2024-07-13 20:04:29.640185] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.119 20:04:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:42.378 20:04:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3171674 00:16:42.378 20:04:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:42.378 20:04:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:42.378 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.311 20:04:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f9fb26f7-60f9-4bf1-b079-5aa193bd4fe8 MY_SNAPSHOT 00:16:43.570 20:04:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=38dd228b-188d-4b68-8ff3-7db6df924406 00:16:43.570 20:04:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f9fb26f7-60f9-4bf1-b079-5aa193bd4fe8 30 00:16:44.136 20:04:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 38dd228b-188d-4b68-8ff3-7db6df924406 MY_CLONE 00:16:44.136 20:04:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c9ba8f69-01db-4aab-b9c7-da554cfc35c7 00:16:44.137 20:04:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c9ba8f69-01db-4aab-b9c7-da554cfc35c7 00:16:45.069 20:04:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3171674 00:16:53.172 Initializing NVMe Controllers 00:16:53.172 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:53.172 Controller IO queue size 128, less than required. 00:16:53.172 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:53.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:53.172 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:53.172 Initialization complete. Launching workers. 
00:16:53.172 ======================================================== 00:16:53.172 Latency(us) 00:16:53.172 Device Information : IOPS MiB/s Average min max 00:16:53.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10621.30 41.49 12055.55 1628.83 79578.57 00:16:53.172 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10541.70 41.18 12147.22 2973.23 57319.40 00:16:53.172 ======================================================== 00:16:53.172 Total : 21163.00 82.67 12101.21 1628.83 79578.57 00:16:53.172 00:16:53.172 20:04:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:53.172 20:04:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f9fb26f7-60f9-4bf1-b079-5aa193bd4fe8 00:16:53.172 20:04:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 61831410-1da2-4536-841a-efa83191fe27 00:16:53.430 20:04:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:53.430 20:04:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:53.430 20:04:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:53.430 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:53.430 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:53.430 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:53.430 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:53.430 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:53.430 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:53.430 rmmod nvme_tcp 00:16:53.687 rmmod nvme_fabrics 00:16:53.687 rmmod nvme_keyring 00:16:53.687 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:53.687 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:53.687 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:53.687 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3171368 ']' 00:16:53.687 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3171368 00:16:53.687 20:04:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 3171368 ']' 00:16:53.687 20:04:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 3171368 00:16:53.687 20:04:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:16:53.687 20:04:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:53.687 20:04:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3171368 00:16:53.687 20:04:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:53.687 20:04:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:53.687 20:04:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3171368' 00:16:53.687 killing process with pid 3171368 00:16:53.687 20:04:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 3171368 00:16:53.687 20:04:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 3171368 00:16:53.946 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:53.946 
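Condensed, the lvol lifecycle this test walked through is the rpc.py sequence below. A sketch rather than the test script itself: $rpc stands in for the full workspace path to scripts/rpc.py, and each UUID is captured from the preceding command's stdout instead of being hard-coded (the run above produced the lvstore 61831410-..., lvol f9fb26f7-..., snapshot 38dd228b-... and clone c9ba8f69-...):

#!/usr/bin/env bash
rpc=./scripts/rpc.py   # placeholder for the full SPDK workspace path

$rpc bdev_malloc_create 64 512                                    # -> Malloc0
$rpc bdev_malloc_create 64 512                                    # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'    # stripe the two malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                    # lvstore UUID on stdout
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                   # initial size 20 (LVOL_BDEV_INIT_SIZE)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# while spdk_nvme_perf drives random writes at the namespace:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                                  # grow to LVOL_BDEV_FINAL_SIZE
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                                   # decouple the clone from its snapshot

# teardown, as traced above:
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"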
20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:53.946 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:53.946 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.946 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:53.946 20:04:41 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.946 20:04:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.946 20:04:41 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.847 20:04:43 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:55.847 00:16:55.847 real 0m18.532s 00:16:55.847 user 1m1.938s 00:16:55.847 sys 0m6.183s 00:16:55.847 20:04:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:55.847 20:04:43 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:55.847 ************************************ 00:16:55.847 END TEST nvmf_lvol 00:16:55.847 ************************************ 00:16:56.105 20:04:43 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:56.105 20:04:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:56.105 20:04:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:56.105 20:04:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:56.105 ************************************ 00:16:56.105 START TEST nvmf_lvs_grow 00:16:56.105 ************************************ 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:56.105 * Looking for test storage... 
00:16:56.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:56.105 20:04:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:58.005 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.005 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:58.005 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:58.006 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:58.006 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:58.006 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:58.006 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:58.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:16:58.006 00:16:58.006 --- 10.0.0.2 ping statistics --- 00:16:58.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.006 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:58.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:16:58.006 00:16:58.006 --- 10.0.0.1 ping statistics --- 00:16:58.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.006 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:58.006 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:58.264 20:04:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:58.264 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:58.264 20:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:58.264 20:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:58.264 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3174933 00:16:58.264 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:58.264 20:04:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3174933 00:16:58.264 20:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 3174933 ']' 00:16:58.264 20:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.264 20:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:58.264 20:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.264 20:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:58.264 20:04:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:58.264 [2024-07-13 20:04:45.729161] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:16:58.264 [2024-07-13 20:04:45.729260] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.264 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.264 [2024-07-13 20:04:45.801065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.264 [2024-07-13 20:04:45.893644] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.264 [2024-07-13 20:04:45.893693] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
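At this point nvmftestinit has finished building the self-contained NVMe/TCP link the rest of the run depends on: one port of the E810 NIC (cvl_0_0) was moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, its sibling port (cvl_0_1) stayed in the root namespace as 10.0.0.1, port 4420 was opened in iptables, and both directions were ping-verified before nvmf_tgt was launched inside the namespace. A condensed sketch of that sequence, assuming the same two-port host; the cvl_0_* names are what this machine assigned and would differ elsewhere:

    NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                             # root namespace -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1         # namespace -> initiator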
00:16:58.264 [2024-07-13 20:04:45.893722] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.264 [2024-07-13 20:04:45.893734] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.264 [2024-07-13 20:04:45.893744] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.264 [2024-07-13 20:04:45.893769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.523 20:04:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:58.523 20:04:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:16:58.523 20:04:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:58.523 20:04:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:58.523 20:04:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:58.523 20:04:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.523 20:04:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:58.781 [2024-07-13 20:04:46.306000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.781 20:04:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:58.781 20:04:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:58.781 20:04:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:58.781 20:04:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:58.781 ************************************ 00:16:58.781 START TEST lvs_grow_clean 00:16:58.781 ************************************ 00:16:58.781 20:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:16:58.781 20:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:58.781 20:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:58.781 20:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:58.781 20:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:58.781 20:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:58.781 20:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:58.781 20:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:58.782 20:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:58.782 20:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:59.040 20:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:59.040 20:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:59.299 20:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6c5ed6e9-baec-44ac-a75d-c4432c38d0ee 00:16:59.299 20:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5ed6e9-baec-44ac-a75d-c4432c38d0ee 00:16:59.299 20:04:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:59.557 20:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:59.557 20:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:59.557 20:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6c5ed6e9-baec-44ac-a75d-c4432c38d0ee lvol 150 00:16:59.816 20:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=61390e60-f240-40f2-a5b5-4f4df6a9da2c 00:16:59.816 20:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:59.816 20:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:00.074 [2024-07-13 20:04:47.596082] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:00.074 [2024-07-13 20:04:47.596170] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:00.074 true 00:17:00.074 20:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5ed6e9-baec-44ac-a75d-c4432c38d0ee 00:17:00.074 20:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:00.332 20:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:00.332 20:04:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:00.591 20:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 61390e60-f240-40f2-a5b5-4f4df6a9da2c 00:17:00.849 20:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:01.107 [2024-07-13 20:04:48.619167] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:01.107 20:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:01.366 20:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3175374 00:17:01.366 20:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:01.366 20:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:01.366 20:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3175374 /var/tmp/bdevperf.sock 00:17:01.366 20:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 3175374 ']' 00:17:01.366 20:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:01.366 20:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:01.366 20:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:01.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:01.366 20:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:01.366 20:04:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:01.366 [2024-07-13 20:04:48.927912] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
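For orientation, the lvs_grow_clean pass boils down to a short RPC sequence: build an lvstore on a 200 MiB file-backed AIO bdev (49 usable 4 MiB clusters once metadata is taken out), carve a 150 MiB lvol from it, double the backing file and rescan, and then, while bdevperf is writing, grow the lvstore to claim the new space. A sketch with shortened stand-ins: $rpc for scripts/rpc.py and $aio for the aio_bdev file under test/nvmf/target:

    truncate -s 200M "$aio"
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M "$aio"
    $rpc bdev_aio_rescan aio_bdev                  # AIO bdev grows; lvstore still reports 49
    $rpc bdev_lvol_grow_lvstore -u "$lvs"          # issued mid-I/O below; count becomes 99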
00:17:01.366 [2024-07-13 20:04:48.927996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3175374 ] 00:17:01.366 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.366 [2024-07-13 20:04:48.993274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.651 [2024-07-13 20:04:49.079314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.651 20:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:01.651 20:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:17:01.651 20:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:02.217 Nvme0n1 00:17:02.217 20:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:02.217 [ 00:17:02.217 { 00:17:02.218 "name": "Nvme0n1", 00:17:02.218 "aliases": [ 00:17:02.218 "61390e60-f240-40f2-a5b5-4f4df6a9da2c" 00:17:02.218 ], 00:17:02.218 "product_name": "NVMe disk", 00:17:02.218 "block_size": 4096, 00:17:02.218 "num_blocks": 38912, 00:17:02.218 "uuid": "61390e60-f240-40f2-a5b5-4f4df6a9da2c", 00:17:02.218 "assigned_rate_limits": { 00:17:02.218 "rw_ios_per_sec": 0, 00:17:02.218 "rw_mbytes_per_sec": 0, 00:17:02.218 "r_mbytes_per_sec": 0, 00:17:02.218 "w_mbytes_per_sec": 0 00:17:02.218 }, 00:17:02.218 "claimed": false, 00:17:02.218 "zoned": false, 00:17:02.218 "supported_io_types": { 00:17:02.218 "read": true, 00:17:02.218 "write": true, 00:17:02.218 "unmap": true, 00:17:02.218 "write_zeroes": true, 00:17:02.218 "flush": true, 00:17:02.218 "reset": true, 00:17:02.218 "compare": true, 00:17:02.218 "compare_and_write": true, 00:17:02.218 "abort": true, 00:17:02.218 "nvme_admin": true, 00:17:02.218 "nvme_io": true 00:17:02.218 }, 00:17:02.218 "memory_domains": [ 00:17:02.218 { 00:17:02.218 "dma_device_id": "system", 00:17:02.218 "dma_device_type": 1 00:17:02.218 } 00:17:02.218 ], 00:17:02.218 "driver_specific": { 00:17:02.218 "nvme": [ 00:17:02.218 { 00:17:02.218 "trid": { 00:17:02.218 "trtype": "TCP", 00:17:02.218 "adrfam": "IPv4", 00:17:02.218 "traddr": "10.0.0.2", 00:17:02.218 "trsvcid": "4420", 00:17:02.218 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:02.218 }, 00:17:02.218 "ctrlr_data": { 00:17:02.218 "cntlid": 1, 00:17:02.218 "vendor_id": "0x8086", 00:17:02.218 "model_number": "SPDK bdev Controller", 00:17:02.218 "serial_number": "SPDK0", 00:17:02.218 "firmware_revision": "24.05.1", 00:17:02.218 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:02.218 "oacs": { 00:17:02.218 "security": 0, 00:17:02.218 "format": 0, 00:17:02.218 "firmware": 0, 00:17:02.218 "ns_manage": 0 00:17:02.218 }, 00:17:02.218 "multi_ctrlr": true, 00:17:02.218 "ana_reporting": false 00:17:02.218 }, 00:17:02.218 "vs": { 00:17:02.218 "nvme_version": "1.3" 00:17:02.218 }, 00:17:02.218 "ns_data": { 00:17:02.218 "id": 1, 00:17:02.218 "can_share": true 00:17:02.218 } 00:17:02.218 } 00:17:02.218 ], 00:17:02.218 "mp_policy": "active_passive" 00:17:02.218 } 00:17:02.218 } 00:17:02.218 ] 00:17:02.218 20:04:49 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3175514 00:17:02.218 20:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:02.218 20:04:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:02.476 Running I/O for 10 seconds... 00:17:03.411 Latency(us) 00:17:03.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.411 Nvme0n1 : 1.00 14135.00 55.21 0.00 0.00 0.00 0.00 0.00 00:17:03.411 =================================================================================================================== 00:17:03.411 Total : 14135.00 55.21 0.00 0.00 0.00 0.00 0.00 00:17:03.411 00:17:04.346 20:04:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6c5ed6e9-baec-44ac-a75d-c4432c38d0ee 00:17:04.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.346 Nvme0n1 : 2.00 14323.00 55.95 0.00 0.00 0.00 0.00 0.00 00:17:04.346 =================================================================================================================== 00:17:04.346 Total : 14323.00 55.95 0.00 0.00 0.00 0.00 0.00 00:17:04.346 00:17:04.604 true 00:17:04.604 20:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5ed6e9-baec-44ac-a75d-c4432c38d0ee 00:17:04.604 20:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:04.863 20:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:04.863 20:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:04.863 20:04:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3175514 00:17:05.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.429 Nvme0n1 : 3.00 14366.67 56.12 0.00 0.00 0.00 0.00 0.00 00:17:05.429 =================================================================================================================== 00:17:05.429 Total : 14366.67 56.12 0.00 0.00 0.00 0.00 0.00 00:17:05.429 00:17:06.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.360 Nvme0n1 : 4.00 14451.00 56.45 0.00 0.00 0.00 0.00 0.00 00:17:06.360 =================================================================================================================== 00:17:06.360 Total : 14451.00 56.45 0.00 0.00 0.00 0.00 0.00 00:17:06.360 00:17:07.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.734 Nvme0n1 : 5.00 14501.80 56.65 0.00 0.00 0.00 0.00 0.00 00:17:07.735 =================================================================================================================== 00:17:07.735 Total : 14501.80 56.65 0.00 0.00 0.00 0.00 0.00 00:17:07.735 00:17:08.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.668 Nvme0n1 : 6.00 14546.83 56.82 0.00 0.00 0.00 0.00 0.00 00:17:08.668 
=================================================================================================================== 00:17:08.668 Total : 14546.83 56.82 0.00 0.00 0.00 0.00 0.00 00:17:08.668 00:17:09.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:09.602 Nvme0n1 : 7.00 14596.29 57.02 0.00 0.00 0.00 0.00 0.00 00:17:09.602 =================================================================================================================== 00:17:09.602 Total : 14596.29 57.02 0.00 0.00 0.00 0.00 0.00 00:17:09.602 00:17:10.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.537 Nvme0n1 : 8.00 14697.25 57.41 0.00 0.00 0.00 0.00 0.00 00:17:10.537 =================================================================================================================== 00:17:10.537 Total : 14697.25 57.41 0.00 0.00 0.00 0.00 0.00 00:17:10.537 00:17:11.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:11.469 Nvme0n1 : 9.00 14789.44 57.77 0.00 0.00 0.00 0.00 0.00 00:17:11.469 =================================================================================================================== 00:17:11.469 Total : 14789.44 57.77 0.00 0.00 0.00 0.00 0.00 00:17:11.469 00:17:12.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.403 Nvme0n1 : 10.00 14818.70 57.89 0.00 0.00 0.00 0.00 0.00 00:17:12.403 =================================================================================================================== 00:17:12.403 Total : 14818.70 57.89 0.00 0.00 0.00 0.00 0.00 00:17:12.403 00:17:12.403 00:17:12.403 Latency(us) 00:17:12.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.403 Nvme0n1 : 10.01 14819.32 57.89 0.00 0.00 8631.47 2924.85 16505.36 00:17:12.403 =================================================================================================================== 00:17:12.403 Total : 14819.32 57.89 0.00 0.00 8631.47 2924.85 16505.36 00:17:12.403 0 00:17:12.403 20:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3175374 00:17:12.403 20:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 3175374 ']' 00:17:12.403 20:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 3175374 00:17:12.403 20:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:17:12.403 20:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:12.403 20:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3175374 00:17:12.403 20:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:12.403 20:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:12.403 20:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3175374' 00:17:12.403 killing process with pid 3175374 00:17:12.403 20:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 3175374 00:17:12.403 Received shutdown signal, test time was about 10.000000 seconds 00:17:12.403 00:17:12.403 Latency(us) 00:17:12.403 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:12.403 =================================================================================================================== 00:17:12.403 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:12.403 20:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 3175374 00:17:12.661 20:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:12.918 20:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:13.484 20:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5ed6e9-baec-44ac-a75d-c4432c38d0ee 00:17:13.484 20:05:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:13.484 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:13.484 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:13.484 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:13.742 [2024-07-13 20:05:01.344659] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:13.742 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5ed6e9-baec-44ac-a75d-c4432c38d0ee 00:17:13.742 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:13.742 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5ed6e9-baec-44ac-a75d-c4432c38d0ee 00:17:13.742 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:13.742 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.742 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:13.742 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.742 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:13.742 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:13.742 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:13.742 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:13.742 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5ed6e9-baec-44ac-a75d-c4432c38d0ee 00:17:14.000 request: 00:17:14.000 { 00:17:14.000 "uuid": "6c5ed6e9-baec-44ac-a75d-c4432c38d0ee", 00:17:14.000 "method": "bdev_lvol_get_lvstores", 00:17:14.000 "req_id": 1 00:17:14.000 } 00:17:14.000 Got JSON-RPC error response 00:17:14.000 response: 00:17:14.000 { 00:17:14.000 "code": -19, 00:17:14.000 "message": "No such device" 00:17:14.000 } 00:17:14.000 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:14.000 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:14.000 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:14.000 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:14.000 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:14.259 aio_bdev 00:17:14.259 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 61390e60-f240-40f2-a5b5-4f4df6a9da2c 00:17:14.259 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=61390e60-f240-40f2-a5b5-4f4df6a9da2c 00:17:14.259 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:14.259 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:17:14.259 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:14.259 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:14.259 20:05:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:14.516 20:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 61390e60-f240-40f2-a5b5-4f4df6a9da2c -t 2000 00:17:14.775 [ 00:17:14.775 { 00:17:14.775 "name": "61390e60-f240-40f2-a5b5-4f4df6a9da2c", 00:17:14.775 "aliases": [ 00:17:14.775 "lvs/lvol" 00:17:14.775 ], 00:17:14.775 "product_name": "Logical Volume", 00:17:14.775 "block_size": 4096, 00:17:14.775 "num_blocks": 38912, 00:17:14.775 "uuid": "61390e60-f240-40f2-a5b5-4f4df6a9da2c", 00:17:14.775 "assigned_rate_limits": { 00:17:14.775 "rw_ios_per_sec": 0, 00:17:14.775 "rw_mbytes_per_sec": 0, 00:17:14.775 "r_mbytes_per_sec": 0, 00:17:14.775 "w_mbytes_per_sec": 0 00:17:14.775 }, 00:17:14.775 "claimed": false, 00:17:14.775 "zoned": false, 00:17:14.775 "supported_io_types": { 00:17:14.775 "read": true, 00:17:14.775 "write": true, 00:17:14.775 "unmap": true, 00:17:14.775 "write_zeroes": true, 00:17:14.775 "flush": false, 00:17:14.775 "reset": true, 00:17:14.775 "compare": false, 00:17:14.775 "compare_and_write": false, 00:17:14.775 "abort": false, 00:17:14.775 "nvme_admin": false, 00:17:14.775 "nvme_io": false 00:17:14.775 }, 00:17:14.775 "driver_specific": { 00:17:14.775 "lvol": { 00:17:14.775 "lvol_store_uuid": "6c5ed6e9-baec-44ac-a75d-c4432c38d0ee", 00:17:14.775 "base_bdev": "aio_bdev", 
00:17:14.775 "thin_provision": false, 00:17:14.775 "num_allocated_clusters": 38, 00:17:14.775 "snapshot": false, 00:17:14.775 "clone": false, 00:17:14.775 "esnap_clone": false 00:17:14.775 } 00:17:14.775 } 00:17:14.775 } 00:17:14.775 ] 00:17:14.775 20:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:17:14.775 20:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5ed6e9-baec-44ac-a75d-c4432c38d0ee 00:17:14.775 20:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:15.033 20:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:15.033 20:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c5ed6e9-baec-44ac-a75d-c4432c38d0ee 00:17:15.033 20:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:15.290 20:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:15.290 20:05:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 61390e60-f240-40f2-a5b5-4f4df6a9da2c 00:17:15.547 20:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c5ed6e9-baec-44ac-a75d-c4432c38d0ee 00:17:15.804 20:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:16.065 20:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:16.065 00:17:16.065 real 0m17.350s 00:17:16.065 user 0m16.764s 00:17:16.065 sys 0m1.876s 00:17:16.065 20:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:16.065 20:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:16.065 ************************************ 00:17:16.065 END TEST lvs_grow_clean 00:17:16.065 ************************************ 00:17:16.065 20:05:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:16.065 20:05:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:16.065 20:05:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:16.065 20:05:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:16.356 ************************************ 00:17:16.356 START TEST lvs_grow_dirty 00:17:16.356 ************************************ 00:17:16.356 20:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:17:16.356 20:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:16.356 20:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:16.356 20:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:17:16.356 20:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:16.356 20:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:16.356 20:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:16.356 20:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:16.356 20:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:16.356 20:05:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:16.356 20:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:16.356 20:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:16.614 20:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1d409bc2-f6c2-463b-a499-dc6dda2e9e78 00:17:16.614 20:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d409bc2-f6c2-463b-a499-dc6dda2e9e78 00:17:16.614 20:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:16.871 20:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:16.871 20:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:16.871 20:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1d409bc2-f6c2-463b-a499-dc6dda2e9e78 lvol 150 00:17:17.128 20:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a26296af-39da-43de-8f47-7b00ec12df2b 00:17:17.128 20:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:17.128 20:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:17.385 [2024-07-13 20:05:04.963024] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:17.385 [2024-07-13 20:05:04.963126] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:17.385 true 00:17:17.385 20:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d409bc2-f6c2-463b-a499-dc6dda2e9e78 00:17:17.385 20:05:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:17:17.643 20:05:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:17.643 20:05:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:17.900 20:05:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a26296af-39da-43de-8f47-7b00ec12df2b 00:17:18.158 20:05:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:18.415 [2024-07-13 20:05:05.974168] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.415 20:05:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:18.673 20:05:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3177423 00:17:18.673 20:05:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:18.673 20:05:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:18.673 20:05:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3177423 /var/tmp/bdevperf.sock 00:17:18.673 20:05:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3177423 ']' 00:17:18.673 20:05:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:18.673 20:05:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:18.673 20:05:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:18.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:18.673 20:05:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:18.673 20:05:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:18.673 [2024-07-13 20:05:06.311726] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
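Both passes publish the lvol to the initiator the same way: the target-side RPCs expose it as namespace 1 of nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.2:4420, then a bdevperf process on the root-namespace side (its own RPC socket, core mask 0x2) attaches over TCP and runs ten seconds of 4 KiB random writes at queue depth 128. Condensed, with paths shortened as before and $lvol the UUID returned by bdev_lvol_create:

    # target side, inside the namespace
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # initiator side, root namespace
    build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests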
00:17:18.673 [2024-07-13 20:05:06.311804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3177423 ] 00:17:18.931 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.931 [2024-07-13 20:05:06.374541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.931 [2024-07-13 20:05:06.467048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.931 20:05:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:18.931 20:05:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:18.931 20:05:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:19.495 Nvme0n1 00:17:19.495 20:05:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:19.753 [ 00:17:19.753 { 00:17:19.753 "name": "Nvme0n1", 00:17:19.753 "aliases": [ 00:17:19.753 "a26296af-39da-43de-8f47-7b00ec12df2b" 00:17:19.753 ], 00:17:19.753 "product_name": "NVMe disk", 00:17:19.753 "block_size": 4096, 00:17:19.753 "num_blocks": 38912, 00:17:19.753 "uuid": "a26296af-39da-43de-8f47-7b00ec12df2b", 00:17:19.753 "assigned_rate_limits": { 00:17:19.753 "rw_ios_per_sec": 0, 00:17:19.753 "rw_mbytes_per_sec": 0, 00:17:19.753 "r_mbytes_per_sec": 0, 00:17:19.753 "w_mbytes_per_sec": 0 00:17:19.753 }, 00:17:19.753 "claimed": false, 00:17:19.753 "zoned": false, 00:17:19.753 "supported_io_types": { 00:17:19.753 "read": true, 00:17:19.753 "write": true, 00:17:19.753 "unmap": true, 00:17:19.753 "write_zeroes": true, 00:17:19.753 "flush": true, 00:17:19.753 "reset": true, 00:17:19.753 "compare": true, 00:17:19.753 "compare_and_write": true, 00:17:19.753 "abort": true, 00:17:19.753 "nvme_admin": true, 00:17:19.753 "nvme_io": true 00:17:19.753 }, 00:17:19.753 "memory_domains": [ 00:17:19.753 { 00:17:19.753 "dma_device_id": "system", 00:17:19.753 "dma_device_type": 1 00:17:19.753 } 00:17:19.753 ], 00:17:19.753 "driver_specific": { 00:17:19.753 "nvme": [ 00:17:19.753 { 00:17:19.753 "trid": { 00:17:19.753 "trtype": "TCP", 00:17:19.753 "adrfam": "IPv4", 00:17:19.753 "traddr": "10.0.0.2", 00:17:19.753 "trsvcid": "4420", 00:17:19.753 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:19.753 }, 00:17:19.753 "ctrlr_data": { 00:17:19.753 "cntlid": 1, 00:17:19.753 "vendor_id": "0x8086", 00:17:19.753 "model_number": "SPDK bdev Controller", 00:17:19.753 "serial_number": "SPDK0", 00:17:19.753 "firmware_revision": "24.05.1", 00:17:19.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:19.753 "oacs": { 00:17:19.753 "security": 0, 00:17:19.753 "format": 0, 00:17:19.753 "firmware": 0, 00:17:19.753 "ns_manage": 0 00:17:19.753 }, 00:17:19.753 "multi_ctrlr": true, 00:17:19.753 "ana_reporting": false 00:17:19.753 }, 00:17:19.753 "vs": { 00:17:19.753 "nvme_version": "1.3" 00:17:19.753 }, 00:17:19.753 "ns_data": { 00:17:19.753 "id": 1, 00:17:19.753 "can_share": true 00:17:19.753 } 00:17:19.753 } 00:17:19.753 ], 00:17:19.753 "mp_policy": "active_passive" 00:17:19.753 } 00:17:19.753 } 00:17:19.753 ] 00:17:19.753 20:05:07 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3177561 00:17:19.753 20:05:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:19.753 20:05:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:19.753 Running I/O for 10 seconds... 00:17:21.125 Latency(us) 00:17:21.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.125 Nvme0n1 : 1.00 14333.00 55.99 0.00 0.00 0.00 0.00 0.00 00:17:21.125 =================================================================================================================== 00:17:21.125 Total : 14333.00 55.99 0.00 0.00 0.00 0.00 0.00 00:17:21.125 00:17:21.691 20:05:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1d409bc2-f6c2-463b-a499-dc6dda2e9e78 00:17:21.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.949 Nvme0n1 : 2.00 14490.50 56.60 0.00 0.00 0.00 0.00 0.00 00:17:21.949 =================================================================================================================== 00:17:21.949 Total : 14490.50 56.60 0.00 0.00 0.00 0.00 0.00 00:17:21.949 00:17:21.949 true 00:17:21.949 20:05:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d409bc2-f6c2-463b-a499-dc6dda2e9e78 00:17:21.949 20:05:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:22.208 20:05:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:22.208 20:05:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:22.208 20:05:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3177561 00:17:22.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.778 Nvme0n1 : 3.00 14764.67 57.67 0.00 0.00 0.00 0.00 0.00 00:17:22.778 =================================================================================================================== 00:17:22.778 Total : 14764.67 57.67 0.00 0.00 0.00 0.00 0.00 00:17:22.778 00:17:24.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.153 Nvme0n1 : 4.00 14734.50 57.56 0.00 0.00 0.00 0.00 0.00 00:17:24.153 =================================================================================================================== 00:17:24.153 Total : 14734.50 57.56 0.00 0.00 0.00 0.00 0.00 00:17:24.153 00:17:24.771 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.771 Nvme0n1 : 5.00 14716.00 57.48 0.00 0.00 0.00 0.00 0.00 00:17:24.771 =================================================================================================================== 00:17:24.771 Total : 14716.00 57.48 0.00 0.00 0.00 0.00 0.00 00:17:24.771 00:17:26.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.147 Nvme0n1 : 6.00 14753.83 57.63 0.00 0.00 0.00 0.00 0.00 00:17:26.147 
=================================================================================================================== 00:17:26.147 Total : 14753.83 57.63 0.00 0.00 0.00 0.00 0.00 00:17:26.147 00:17:27.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.086 Nvme0n1 : 7.00 14774.57 57.71 0.00 0.00 0.00 0.00 0.00 00:17:27.086 =================================================================================================================== 00:17:27.086 Total : 14774.57 57.71 0.00 0.00 0.00 0.00 0.00 00:17:27.086 00:17:28.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:28.028 Nvme0n1 : 8.00 14873.12 58.10 0.00 0.00 0.00 0.00 0.00 00:17:28.028 =================================================================================================================== 00:17:28.028 Total : 14873.12 58.10 0.00 0.00 0.00 0.00 0.00 00:17:28.028 00:17:28.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:28.967 Nvme0n1 : 9.00 14922.22 58.29 0.00 0.00 0.00 0.00 0.00 00:17:28.967 =================================================================================================================== 00:17:28.967 Total : 14922.22 58.29 0.00 0.00 0.00 0.00 0.00 00:17:28.967 00:17:29.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.975 Nvme0n1 : 10.00 14962.20 58.45 0.00 0.00 0.00 0.00 0.00 00:17:29.975 =================================================================================================================== 00:17:29.975 Total : 14962.20 58.45 0.00 0.00 0.00 0.00 0.00 00:17:29.975 00:17:29.975 00:17:29.975 Latency(us) 00:17:29.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.975 Nvme0n1 : 10.01 14966.37 58.46 0.00 0.00 8547.24 4854.52 17087.91 00:17:29.975 =================================================================================================================== 00:17:29.975 Total : 14966.37 58.46 0.00 0.00 8547.24 4854.52 17087.91 00:17:29.975 0 00:17:29.975 20:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3177423 00:17:29.975 20:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 3177423 ']' 00:17:29.975 20:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 3177423 00:17:29.975 20:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:17:29.975 20:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:29.975 20:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3177423 00:17:29.975 20:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:29.975 20:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:29.975 20:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3177423' 00:17:29.975 killing process with pid 3177423 00:17:29.975 20:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 3177423 00:17:29.975 Received shutdown signal, test time was about 10.000000 seconds 00:17:29.975 00:17:29.975 Latency(us) 00:17:29.975 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:29.975 =================================================================================================================== 00:17:29.975 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:29.975 20:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 3177423 00:17:30.234 20:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:30.492 20:05:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:30.750 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d409bc2-f6c2-463b-a499-dc6dda2e9e78 00:17:30.750 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:31.009 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:31.009 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:31.009 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3174933 00:17:31.009 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3174933 00:17:31.010 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3174933 Killed "${NVMF_APP[@]}" "$@" 00:17:31.010 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:31.010 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:31.010 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:31.010 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:31.010 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:31.010 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:31.010 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3178881 00:17:31.010 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3178881 00:17:31.010 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 3178881 ']' 00:17:31.010 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.010 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:31.010 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
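Note: the dirty-grow verification above hinges on two lvstore counters. After bdev_lvol_grow_lvstore, total_data_clusters must report the enlarged size (99) while free_clusters (61) still reflects what the lvol already consumed; the target is then killed with kill -9 so the grown metadata is never flushed cleanly. A minimal sketch of the same probe outside the harness, assuming a target on the default /var/tmp/spdk.sock and reusing the UUID from this run:

    # Hedged sketch -- socket, RPC names, and UUID are taken from this log;
    # adjust paths for your checkout.
    uuid=1d409bc2-f6c2-463b-a499-dc6dda2e9e78
    total=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].total_data_clusters')
    free=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$uuid" | jq -r '.[0].free_clusters')
    echo "total_data_clusters=$total free_clusters=$free"
    (( total == 99 )) || { echo "lvstore grow not reflected" >&2; exit 1; }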
00:17:31.010 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:31.010 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:31.010 [2024-07-13 20:05:18.580076] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:31.010 [2024-07-13 20:05:18.580173] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.010 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.010 [2024-07-13 20:05:18.649319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.268 [2024-07-13 20:05:18.736724] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.268 [2024-07-13 20:05:18.736781] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.268 [2024-07-13 20:05:18.736808] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.268 [2024-07-13 20:05:18.736820] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.268 [2024-07-13 20:05:18.736829] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.268 [2024-07-13 20:05:18.736856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.268 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:31.268 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:17:31.268 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.268 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:31.268 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:31.268 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.268 20:05:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:31.527 [2024-07-13 20:05:19.104122] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:31.527 [2024-07-13 20:05:19.104300] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:31.527 [2024-07-13 20:05:19.104348] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:31.527 20:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:31.527 20:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a26296af-39da-43de-8f47-7b00ec12df2b 00:17:31.527 20:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=a26296af-39da-43de-8f47-7b00ec12df2b 00:17:31.527 20:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:31.527 20:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:31.527 20:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:31.527 20:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:31.527 20:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:31.786 20:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a26296af-39da-43de-8f47-7b00ec12df2b -t 2000 00:17:32.044 [ 00:17:32.044 { 00:17:32.044 "name": "a26296af-39da-43de-8f47-7b00ec12df2b", 00:17:32.044 "aliases": [ 00:17:32.044 "lvs/lvol" 00:17:32.044 ], 00:17:32.044 "product_name": "Logical Volume", 00:17:32.044 "block_size": 4096, 00:17:32.044 "num_blocks": 38912, 00:17:32.044 "uuid": "a26296af-39da-43de-8f47-7b00ec12df2b", 00:17:32.044 "assigned_rate_limits": { 00:17:32.044 "rw_ios_per_sec": 0, 00:17:32.044 "rw_mbytes_per_sec": 0, 00:17:32.044 "r_mbytes_per_sec": 0, 00:17:32.044 "w_mbytes_per_sec": 0 00:17:32.044 }, 00:17:32.044 "claimed": false, 00:17:32.044 "zoned": false, 00:17:32.044 "supported_io_types": { 00:17:32.044 "read": true, 00:17:32.044 "write": true, 00:17:32.044 "unmap": true, 00:17:32.044 "write_zeroes": true, 00:17:32.044 "flush": false, 00:17:32.044 "reset": true, 00:17:32.044 "compare": false, 00:17:32.044 "compare_and_write": false, 00:17:32.044 "abort": false, 00:17:32.044 "nvme_admin": false, 00:17:32.044 "nvme_io": false 00:17:32.044 }, 00:17:32.044 "driver_specific": { 00:17:32.044 "lvol": { 00:17:32.044 "lvol_store_uuid": "1d409bc2-f6c2-463b-a499-dc6dda2e9e78", 00:17:32.044 "base_bdev": "aio_bdev", 00:17:32.044 "thin_provision": false, 00:17:32.044 "num_allocated_clusters": 38, 00:17:32.044 "snapshot": false, 00:17:32.044 "clone": false, 00:17:32.044 "esnap_clone": false 00:17:32.044 } 00:17:32.044 } 00:17:32.044 } 00:17:32.044 ] 00:17:32.044 20:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:32.044 20:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d409bc2-f6c2-463b-a499-dc6dda2e9e78 00:17:32.044 20:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:32.302 20:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:32.302 20:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d409bc2-f6c2-463b-a499-dc6dda2e9e78 00:17:32.302 20:05:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:32.559 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:32.559 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:32.817 [2024-07-13 20:05:20.425408] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:32.817 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
1d409bc2-f6c2-463b-a499-dc6dda2e9e78 00:17:32.817 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:32.817 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d409bc2-f6c2-463b-a499-dc6dda2e9e78 00:17:32.817 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.817 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.817 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.817 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.817 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.817 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:32.817 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:32.817 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:32.817 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d409bc2-f6c2-463b-a499-dc6dda2e9e78 00:17:33.074 request: 00:17:33.074 { 00:17:33.074 "uuid": "1d409bc2-f6c2-463b-a499-dc6dda2e9e78", 00:17:33.074 "method": "bdev_lvol_get_lvstores", 00:17:33.074 "req_id": 1 00:17:33.074 } 00:17:33.074 Got JSON-RPC error response 00:17:33.074 response: 00:17:33.074 { 00:17:33.074 "code": -19, 00:17:33.074 "message": "No such device" 00:17:33.074 } 00:17:33.332 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:33.332 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:33.332 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:33.332 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:33.332 20:05:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:33.589 aio_bdev 00:17:33.589 20:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a26296af-39da-43de-8f47-7b00ec12df2b 00:17:33.589 20:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=a26296af-39da-43de-8f47-7b00ec12df2b 00:17:33.589 20:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:33.589 20:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:17:33.589 20:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
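Note: the NOT wrapper around bdev_lvol_get_lvstores asserts the inverse outcome: once bdev_aio_delete removes the base bdev, the lvstore must disappear and the RPC must fail with -19 (No such device), as the JSON-RPC error above shows; recreating the aio bdev then triggers the blobstore recovery ("Performing recovery on blobstore") logged earlier. A sketch of the same negative check in plain bash, under the same socket and UUID assumptions:

    # Expect failure: the lvstore should be gone along with its base bdev.
    if scripts/rpc.py bdev_lvol_get_lvstores -u 1d409bc2-f6c2-463b-a499-dc6dda2e9e78 2>/dev/null; then
        echo "unexpected: lvstore still registered" >&2
        exit 1
    fi
    # Re-attach the backing file; load replays the dirty metadata.
    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096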
00:17:33.589 20:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:33.589 20:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:33.847 20:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a26296af-39da-43de-8f47-7b00ec12df2b -t 2000 00:17:33.847 [ 00:17:33.847 { 00:17:33.847 "name": "a26296af-39da-43de-8f47-7b00ec12df2b", 00:17:33.847 "aliases": [ 00:17:33.847 "lvs/lvol" 00:17:33.847 ], 00:17:33.847 "product_name": "Logical Volume", 00:17:33.847 "block_size": 4096, 00:17:33.847 "num_blocks": 38912, 00:17:33.847 "uuid": "a26296af-39da-43de-8f47-7b00ec12df2b", 00:17:33.847 "assigned_rate_limits": { 00:17:33.847 "rw_ios_per_sec": 0, 00:17:33.847 "rw_mbytes_per_sec": 0, 00:17:33.847 "r_mbytes_per_sec": 0, 00:17:33.847 "w_mbytes_per_sec": 0 00:17:33.847 }, 00:17:33.847 "claimed": false, 00:17:33.847 "zoned": false, 00:17:33.847 "supported_io_types": { 00:17:33.847 "read": true, 00:17:33.847 "write": true, 00:17:33.847 "unmap": true, 00:17:33.847 "write_zeroes": true, 00:17:33.847 "flush": false, 00:17:33.847 "reset": true, 00:17:33.847 "compare": false, 00:17:33.847 "compare_and_write": false, 00:17:33.847 "abort": false, 00:17:33.847 "nvme_admin": false, 00:17:33.847 "nvme_io": false 00:17:33.847 }, 00:17:33.847 "driver_specific": { 00:17:33.847 "lvol": { 00:17:33.847 "lvol_store_uuid": "1d409bc2-f6c2-463b-a499-dc6dda2e9e78", 00:17:33.847 "base_bdev": "aio_bdev", 00:17:33.847 "thin_provision": false, 00:17:33.847 "num_allocated_clusters": 38, 00:17:33.847 "snapshot": false, 00:17:33.847 "clone": false, 00:17:33.847 "esnap_clone": false 00:17:33.847 } 00:17:33.847 } 00:17:33.847 } 00:17:33.847 ] 00:17:34.104 20:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:17:34.104 20:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d409bc2-f6c2-463b-a499-dc6dda2e9e78 00:17:34.104 20:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:34.362 20:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:34.362 20:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d409bc2-f6c2-463b-a499-dc6dda2e9e78 00:17:34.362 20:05:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:34.619 20:05:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:34.619 20:05:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a26296af-39da-43de-8f47-7b00ec12df2b 00:17:34.876 20:05:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1d409bc2-f6c2-463b-a499-dc6dda2e9e78 00:17:35.134 20:05:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:35.392 00:17:35.392 real 0m19.124s 00:17:35.392 user 0m48.275s 00:17:35.392 sys 0m4.654s 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:35.392 ************************************ 00:17:35.392 END TEST lvs_grow_dirty 00:17:35.392 ************************************ 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:35.392 nvmf_trace.0 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:35.392 rmmod nvme_tcp 00:17:35.392 rmmod nvme_fabrics 00:17:35.392 rmmod nvme_keyring 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3178881 ']' 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3178881 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 3178881 ']' 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 3178881 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:35.392 20:05:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3178881 00:17:35.392 20:05:23 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:35.392 20:05:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:35.392 20:05:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3178881' 00:17:35.392 killing process with pid 3178881 00:17:35.392 20:05:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 3178881 00:17:35.392 20:05:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 3178881 00:17:35.650 20:05:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.650 20:05:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.650 20:05:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.650 20:05:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.650 20:05:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.650 20:05:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.650 20:05:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.650 20:05:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.179 20:05:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:38.179 00:17:38.179 real 0m41.766s 00:17:38.179 user 1m10.852s 00:17:38.179 sys 0m8.357s 00:17:38.179 20:05:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:38.179 20:05:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:38.179 ************************************ 00:17:38.179 END TEST nvmf_lvs_grow 00:17:38.179 ************************************ 00:17:38.179 20:05:25 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:38.179 20:05:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:38.179 20:05:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:38.179 20:05:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:38.179 ************************************ 00:17:38.179 START TEST nvmf_bdev_io_wait 00:17:38.179 ************************************ 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:38.179 * Looking for test storage... 
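Note: each suite here executes under the harness's run_test helper, which brackets the script in START/END banners and a time measurement (the real 0m41.766s above covers all of nvmf_lvs_grow). An illustrative approximation of that wrapper, assuming bash; the real helper lives in autotest_common.sh and carries extra bookkeeping not shown here:

    # Simplified stand-in for run_test (details assumed, not verbatim):
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                     # prints real/user/sys as in the log
        echo "************ END TEST $name ************"
    }
    run_test nvmf_bdev_io_wait test/nvmf/target/bdev_io_wait.sh --transport=tcp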
00:17:38.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.179 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:38.180 20:05:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:40.079 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:40.079 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:40.079 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:40.079 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:40.080 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:40.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:40.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:17:40.080 00:17:40.080 --- 10.0.0.2 ping statistics --- 00:17:40.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.080 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:17:40.080 00:17:40.080 --- 10.0.0.1 ping statistics --- 00:17:40.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.080 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3181398 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3181398 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 3181398 ']' 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:40.080 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.080 [2024-07-13 20:05:27.648791] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
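Note: the nvmf_tcp_init steps above build the phy-NIC topology that these pings confirm: the first ice/E810 port (cvl_0_0, 10.0.0.2) moves into the cvl_0_0_ns_spdk namespace as the target side, while cvl_0_1 (10.0.0.1) stays in the host namespace as the initiator, with an iptables accept rule for TCP/4420; the nvmf target itself is then launched inside the namespace. Condensed into a standalone sequence, assuming root and the same interface names:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target reachability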
00:17:40.080 [2024-07-13 20:05:27.648888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.080 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.080 [2024-07-13 20:05:27.712787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:40.338 [2024-07-13 20:05:27.799849] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.338 [2024-07-13 20:05:27.799923] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.338 [2024-07-13 20:05:27.799938] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.338 [2024-07-13 20:05:27.799949] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.338 [2024-07-13 20:05:27.799958] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.338 [2024-07-13 20:05:27.800007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.338 [2024-07-13 20:05:27.800267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.338 [2024-07-13 20:05:27.800328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.338 [2024-07-13 20:05:27.800331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.338 [2024-07-13 20:05:27.965308] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.338 20:05:27 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.338 20:05:27 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.596 Malloc0 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.596 [2024-07-13 20:05:28.040492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3181545 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3181546 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:40.596 { 00:17:40.596 "params": { 00:17:40.596 "name": "Nvme$subsystem", 00:17:40.596 "trtype": "$TEST_TRANSPORT", 00:17:40.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:40.596 "adrfam": "ipv4", 00:17:40.596 "trsvcid": "$NVMF_PORT", 00:17:40.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:40.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:40.596 
"hdgst": ${hdgst:-false}, 00:17:40.596 "ddgst": ${ddgst:-false} 00:17:40.596 }, 00:17:40.596 "method": "bdev_nvme_attach_controller" 00:17:40.596 } 00:17:40.596 EOF 00:17:40.596 )") 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3181549 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:40.596 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:40.596 { 00:17:40.596 "params": { 00:17:40.596 "name": "Nvme$subsystem", 00:17:40.596 "trtype": "$TEST_TRANSPORT", 00:17:40.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:40.596 "adrfam": "ipv4", 00:17:40.596 "trsvcid": "$NVMF_PORT", 00:17:40.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:40.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:40.596 "hdgst": ${hdgst:-false}, 00:17:40.596 "ddgst": ${ddgst:-false} 00:17:40.596 }, 00:17:40.597 "method": "bdev_nvme_attach_controller" 00:17:40.597 } 00:17:40.597 EOF 00:17:40.597 )") 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3181552 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:40.597 { 00:17:40.597 "params": { 00:17:40.597 "name": "Nvme$subsystem", 00:17:40.597 "trtype": "$TEST_TRANSPORT", 00:17:40.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:40.597 "adrfam": "ipv4", 00:17:40.597 "trsvcid": "$NVMF_PORT", 00:17:40.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:40.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:40.597 "hdgst": ${hdgst:-false}, 00:17:40.597 "ddgst": ${ddgst:-false} 00:17:40.597 }, 00:17:40.597 "method": "bdev_nvme_attach_controller" 00:17:40.597 } 00:17:40.597 EOF 00:17:40.597 )") 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:17:40.597 { 00:17:40.597 "params": { 00:17:40.597 "name": "Nvme$subsystem", 00:17:40.597 "trtype": "$TEST_TRANSPORT", 00:17:40.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:40.597 "adrfam": "ipv4", 00:17:40.597 "trsvcid": "$NVMF_PORT", 00:17:40.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:40.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:40.597 "hdgst": ${hdgst:-false}, 00:17:40.597 "ddgst": ${ddgst:-false} 00:17:40.597 }, 00:17:40.597 "method": "bdev_nvme_attach_controller" 00:17:40.597 } 00:17:40.597 EOF 00:17:40.597 )") 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3181545 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:40.597 "params": { 00:17:40.597 "name": "Nvme1", 00:17:40.597 "trtype": "tcp", 00:17:40.597 "traddr": "10.0.0.2", 00:17:40.597 "adrfam": "ipv4", 00:17:40.597 "trsvcid": "4420", 00:17:40.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.597 "hdgst": false, 00:17:40.597 "ddgst": false 00:17:40.597 }, 00:17:40.597 "method": "bdev_nvme_attach_controller" 00:17:40.597 }' 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:40.597 "params": { 00:17:40.597 "name": "Nvme1", 00:17:40.597 "trtype": "tcp", 00:17:40.597 "traddr": "10.0.0.2", 00:17:40.597 "adrfam": "ipv4", 00:17:40.597 "trsvcid": "4420", 00:17:40.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.597 "hdgst": false, 00:17:40.597 "ddgst": false 00:17:40.597 }, 00:17:40.597 "method": "bdev_nvme_attach_controller" 00:17:40.597 }' 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:40.597 "params": { 00:17:40.597 "name": "Nvme1", 00:17:40.597 "trtype": "tcp", 00:17:40.597 "traddr": "10.0.0.2", 00:17:40.597 "adrfam": "ipv4", 00:17:40.597 "trsvcid": "4420", 00:17:40.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.597 "hdgst": false, 00:17:40.597 "ddgst": false 00:17:40.597 }, 00:17:40.597 "method": "bdev_nvme_attach_controller" 00:17:40.597 }' 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:40.597 20:05:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:40.597 "params": { 00:17:40.597 "name": "Nvme1", 00:17:40.597 "trtype": "tcp", 00:17:40.597 "traddr": "10.0.0.2", 00:17:40.597 "adrfam": "ipv4", 00:17:40.597 "trsvcid": "4420", 00:17:40.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.597 "hdgst": false, 00:17:40.597 "ddgst": false 00:17:40.597 }, 00:17:40.597 "method": 
"bdev_nvme_attach_controller" 00:17:40.597 }' 00:17:40.597 [2024-07-13 20:05:28.086681] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:40.597 [2024-07-13 20:05:28.086682] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:40.597 [2024-07-13 20:05:28.086682] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:40.597 [2024-07-13 20:05:28.086771] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-13 20:05:28.086770] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-13 20:05:28.086771] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:40.597 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:40.597 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:40.597 [2024-07-13 20:05:28.087624] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:17:40.597 [2024-07-13 20:05:28.087691] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:40.597 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.597 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.856 [2024-07-13 20:05:28.260535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.856 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.856 [2024-07-13 20:05:28.335506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:40.856 [2024-07-13 20:05:28.360201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.856 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.856 [2024-07-13 20:05:28.435216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:40.856 [2024-07-13 20:05:28.458096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.114 [2024-07-13 20:05:28.533748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.114 [2024-07-13 20:05:28.537846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:41.114 [2024-07-13 20:05:28.604358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:41.372 Running I/O for 1 seconds... 00:17:41.372 Running I/O for 1 seconds... 00:17:41.372 Running I/O for 1 seconds... 00:17:41.372 Running I/O for 1 seconds... 
00:17:42.306 00:17:42.306 Latency(us) 00:17:42.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.306 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:42.306 Nvme1n1 : 1.00 153213.47 598.49 0.00 0.00 832.32 314.03 1426.01 00:17:42.306 =================================================================================================================== 00:17:42.306 Total : 153213.47 598.49 0.00 0.00 832.32 314.03 1426.01 00:17:42.306 00:17:42.306 Latency(us) 00:17:42.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.306 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:42.306 Nvme1n1 : 1.01 8952.48 34.97 0.00 0.00 14230.32 8204.14 27185.30 00:17:42.306 =================================================================================================================== 00:17:42.306 Total : 8952.48 34.97 0.00 0.00 14230.32 8204.14 27185.30 00:17:42.306 00:17:42.306 Latency(us) 00:17:42.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.306 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:42.306 Nvme1n1 : 1.01 9792.19 38.25 0.00 0.00 13007.82 8980.86 24175.50 00:17:42.306 =================================================================================================================== 00:17:42.306 Total : 9792.19 38.25 0.00 0.00 13007.82 8980.86 24175.50 00:17:42.306 00:17:42.306 Latency(us) 00:17:42.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.306 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:42.306 Nvme1n1 : 1.01 9939.15 38.82 0.00 0.00 12839.18 4636.07 25049.32 00:17:42.306 =================================================================================================================== 00:17:42.306 Total : 9939.15 38.82 0.00 0.00 12839.18 4636.07 25049.32 00:17:42.563 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3181546 00:17:42.563 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3181549 00:17:42.563 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3181552 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:42.821 rmmod nvme_tcp 00:17:42.821 rmmod nvme_fabrics 00:17:42.821 rmmod nvme_keyring 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3181398 ']' 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3181398 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 3181398 ']' 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 3181398 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3181398 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3181398' 00:17:42.821 killing process with pid 3181398 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 3181398 00:17:42.821 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 3181398 00:17:43.079 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:43.079 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:43.079 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:43.079 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:43.079 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:43.080 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.080 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.080 20:05:30 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.982 20:05:32 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:44.982 00:17:44.982 real 0m7.218s 00:17:44.982 user 0m15.909s 00:17:44.982 sys 0m3.814s 00:17:44.982 20:05:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:44.982 20:05:32 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:44.982 ************************************ 00:17:44.982 END TEST nvmf_bdev_io_wait 00:17:44.982 ************************************ 00:17:44.982 20:05:32 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:44.982 20:05:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:44.982 20:05:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:44.982 20:05:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:44.982 ************************************ 00:17:44.982 START TEST nvmf_queue_depth 00:17:44.982 ************************************ 00:17:44.982 20:05:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:45.240 * Looking for test storage... 00:17:45.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.240 20:05:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.240 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:45.240 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.240 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:45.241 20:05:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:47.143 
20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:47.143 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:47.143 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:47.143 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:47.143 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:47.143 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:47.144 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:47.144 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:47.144 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:47.144 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:47.144 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:47.144 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:47.144 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:47.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:47.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:17:47.402 00:17:47.402 --- 10.0.0.2 ping statistics --- 00:17:47.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.402 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:47.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:47.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:17:47.402 00:17:47.402 --- 10.0.0.1 ping statistics --- 00:17:47.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.402 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3183764 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3183764 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3183764 ']' 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.402 20:05:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:47.403 20:05:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:47.403 20:05:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:47.403 20:05:34 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.403 [2024-07-13 20:05:34.878209] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
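
The nvmf_tcp_init sequence traced above (nvmf/common.sh@229-268) is what lets one dual-port NIC play both roles on a single host: port cvl_0_1 stays in the root namespace as the initiator, while cvl_0_0 moves into a private namespace that nvmf_tgt is launched in. The same flow, condensed into a standalone root-shell sketch with the interface and namespace names from this run:

# Condensed from the nvmf_tcp_init trace above; requires root.
NS=cvl_0_0_ns_spdk      # namespace that will host nvmf_tgt
TGT_IF=cvl_0_0          # target-side port -> 10.0.0.2
INI_IF=cvl_0_1          # initiator-side port -> 10.0.0.1

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"      # target port disappears from the root ns

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP (port 4420) arriving on the initiator-side interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity checks in both directions, matching the ping output in the log:
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# From here on, every target-side command is prefixed with "ip netns exec $NS",
# e.g. the nvmf_tgt launch directly above.
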
00:17:47.403 [2024-07-13 20:05:34.878289] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.403 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.403 [2024-07-13 20:05:34.945262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.403 [2024-07-13 20:05:35.035315] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.403 [2024-07-13 20:05:35.035380] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.403 [2024-07-13 20:05:35.035407] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.403 [2024-07-13 20:05:35.035421] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.403 [2024-07-13 20:05:35.035433] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.403 [2024-07-13 20:05:35.035464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.661 [2024-07-13 20:05:35.183510] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.661 Malloc0 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.661 20:05:35 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.661 [2024-07-13 20:05:35.241821] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3183790 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3183790 /var/tmp/bdevperf.sock 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 3183790 ']' 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:47.661 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:47.661 [2024-07-13 20:05:35.289514] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
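
Stripped of the xtrace plumbing, queue_depth.sh@23-35 boils down to a short RPC bring-up plus a deep-queue bdevperf client. A condensed sketch, with binary paths shortened and rpc.py standing in for the harness's rpc_cmd wrapper (target-side calls go through the netns-wrapped /var/tmp/spdk.sock):

# Target side: build the NVMe-oF/TCP target configured above.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB backing bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Client side: bdevperf idles in -z (wait-for-RPC) mode on its own socket, then
# drives 1024 outstanding 4 KiB verify ops for 10 s -- the queue depth under test.
bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

# The next two steps follow in the log: attach the remote controller over TCP,
# then kick off the run via bdevperf.py.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
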
00:17:47.661 [2024-07-13 20:05:35.289600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3183790 ] 00:17:47.661 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.920 [2024-07-13 20:05:35.348680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.920 [2024-07-13 20:05:35.434263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.920 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:47.920 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:17:47.920 20:05:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:47.920 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.920 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:48.179 NVMe0n1 00:17:48.179 20:05:35 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.179 20:05:35 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:48.179 Running I/O for 10 seconds... 00:17:58.198 00:17:58.198 Latency(us) 00:17:58.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.198 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:58.198 Verification LBA range: start 0x0 length 0x4000 00:17:58.198 NVMe0n1 : 10.08 8464.93 33.07 0.00 0.00 120358.86 22816.24 75342.13 00:17:58.198 =================================================================================================================== 00:17:58.198 Total : 8464.93 33.07 0.00 0.00 120358.86 22816.24 75342.13 00:17:58.198 0 00:17:58.198 20:05:45 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3183790 00:17:58.198 20:05:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3183790 ']' 00:17:58.198 20:05:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3183790 00:17:58.198 20:05:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:58.455 20:05:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:58.455 20:05:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3183790 00:17:58.455 20:05:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:58.455 20:05:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:58.455 20:05:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3183790' 00:17:58.455 killing process with pid 3183790 00:17:58.455 20:05:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3183790 00:17:58.455 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.455 00:17:58.455 Latency(us) 00:17:58.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.455 =================================================================================================================== 00:17:58.455 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.455 20:05:45 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3183790 00:17:58.455 20:05:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:58.455 20:05:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:58.455 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:58.455 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:58.455 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:58.455 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:58.455 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:58.455 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:58.714 rmmod nvme_tcp 00:17:58.714 rmmod nvme_fabrics 00:17:58.714 rmmod nvme_keyring 00:17:58.714 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:58.714 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:58.714 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:58.714 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3183764 ']' 00:17:58.714 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3183764 00:17:58.714 20:05:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 3183764 ']' 00:17:58.714 20:05:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 3183764 00:17:58.714 20:05:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:17:58.714 20:05:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:58.714 20:05:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3183764 00:17:58.714 20:05:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:58.714 20:05:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:58.714 20:05:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3183764' 00:17:58.714 killing process with pid 3183764 00:17:58.714 20:05:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 3183764 00:17:58.714 20:05:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 3183764 00:17:58.971 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:58.971 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:58.971 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:58.971 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:58.971 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:58.971 20:05:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.971 20:05:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.971 20:05:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.870 20:05:48 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:00.870 00:18:00.870 real 0m15.895s 00:18:00.870 user 0m22.231s 00:18:00.870 sys 
0m3.097s 00:18:00.870 20:05:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:00.870 20:05:48 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:00.870 ************************************ 00:18:00.870 END TEST nvmf_queue_depth 00:18:00.871 ************************************ 00:18:01.129 20:05:48 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:01.129 20:05:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:01.129 20:05:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:01.129 20:05:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:01.129 ************************************ 00:18:01.129 START TEST nvmf_target_multipath 00:18:01.129 ************************************ 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:01.129 * Looking for test storage... 00:18:01.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.129 20:05:48 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:01.129 20:05:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:03.029 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:03.029 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:03.029 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:03.029 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:03.029 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:03.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:18:03.334 00:18:03.334 --- 10.0.0.2 ping statistics --- 00:18:03.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.334 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:03.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
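For orientation: the nvmf_tcp_init sequence traced above splits the two E810 ports across a network namespace so the NVMe/TCP target and initiator can share one host. Condensed to the commands that matter, all taken verbatim from the trace (the reverse ping whose header appears above reports its reply just below):

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1             # start from clean interfaces
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator NIC
ping -c 1 10.0.0.2                                             # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator reachability

With only one usable port pair on this rig, NVMF_SECOND_TARGET_IP is left empty (nvmf/common.sh@240), which is why the multipath test bails out with 'only one NIC for nvmf test' shortly after.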
00:18:03.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:18:03.334 00:18:03.334 --- 10.0.0.1 ping statistics --- 00:18:03.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.334 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:03.334 only one NIC for nvmf test 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:03.334 rmmod nvme_tcp 00:18:03.334 rmmod nvme_fabrics 00:18:03.334 rmmod nvme_keyring 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.334 20:05:50 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.234 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:05.234 20:05:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:05.234 20:05:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:05.234 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:05.234 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:05.234 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:05.234 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:05.234 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:05.234 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:05.234 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:05.493 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:05.493 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:05.493 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:05.493 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:05.493 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:05.493 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:05.493 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:05.493 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:05.493 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.493 20:05:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.493 20:05:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.493 20:05:52 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:05.493 00:18:05.493 real 0m4.347s 00:18:05.493 user 0m0.773s 00:18:05.493 sys 0m1.569s 00:18:05.493 20:05:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:05.493 20:05:52 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:05.493 ************************************ 00:18:05.493 END TEST nvmf_target_multipath 00:18:05.493 ************************************ 00:18:05.493 20:05:52 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:05.493 20:05:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:05.493 20:05:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:05.493 20:05:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:05.493 ************************************ 00:18:05.493 START TEST nvmf_zcopy 00:18:05.493 ************************************ 00:18:05.493 20:05:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:05.493 * Looking for test storage... 
00:18:05.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:05.493 20:05:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.493 20:05:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:05.493 20:05:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.493 20:05:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.493 20:05:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.493 20:05:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.493 20:05:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.493 20:05:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.493 20:05:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.493 20:05:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.493 20:05:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.493 20:05:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:05.493 20:05:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:07.395 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.395 
20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:07.395 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:07.395 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:07.395 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.395 20:05:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:07.395 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:07.395 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.395 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:07.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:18:07.654 00:18:07.654 --- 10.0.0.2 ping statistics --- 00:18:07.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.654 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:07.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:18:07.654 00:18:07.654 --- 10.0.0.1 ping statistics --- 00:18:07.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.654 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3188884 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3188884 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 3188884 ']' 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:07.654 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:07.654 [2024-07-13 20:05:55.204558] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:07.654 [2024-07-13 20:05:55.204632] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.654 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.654 [2024-07-13 20:05:55.273447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.913 [2024-07-13 20:05:55.368241] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.913 [2024-07-13 20:05:55.368310] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:07.913 [2024-07-13 20:05:55.368334] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.913 [2024-07-13 20:05:55.368349] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.913 [2024-07-13 20:05:55.368361] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:07.913 [2024-07-13 20:05:55.368393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:07.913 [2024-07-13 20:05:55.515925] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:07.913 [2024-07-13 20:05:55.532099] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:07.913 malloc0 00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.913 
20:05:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:18:07.913 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:18:07.913 {
00:18:07.913   "params": {
00:18:07.913     "name": "Nvme$subsystem",
00:18:07.913     "trtype": "$TEST_TRANSPORT",
00:18:07.913     "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:07.913     "adrfam": "ipv4",
00:18:07.913     "trsvcid": "$NVMF_PORT",
00:18:07.913     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:07.913     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:07.913     "hdgst": ${hdgst:-false},
00:18:07.913     "ddgst": ${ddgst:-false}
00:18:07.913   },
00:18:07.913   "method": "bdev_nvme_attach_controller"
00:18:07.913 }
00:18:07.913 EOF
00:18:07.913 )")
00:18:08.172 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:18:08.172 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:18:08.172 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:18:08.172 20:05:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:18:08.172   "params": {
00:18:08.172     "name": "Nvme1",
00:18:08.172     "trtype": "tcp",
00:18:08.172     "traddr": "10.0.0.2",
00:18:08.172     "adrfam": "ipv4",
00:18:08.172     "trsvcid": "4420",
00:18:08.172     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:08.172     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:08.172     "hdgst": false,
00:18:08.172     "ddgst": false
00:18:08.172   },
00:18:08.172   "method": "bdev_nvme_attach_controller"
00:18:08.172 }'
00:18:08.172 [2024-07-13 20:05:55.612900] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:18:08.172 [2024-07-13 20:05:55.612977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188980 ]
00:18:08.172 EAL: No free 2048 kB hugepages reported on node 1
00:18:08.172 [2024-07-13 20:05:55.677042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:08.172 [2024-07-13 20:05:55.772021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:08.430 Running I/O for 10 seconds...
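Collected in one place, the target-side bring-up that rpc_cmd traced above (zcopy.sh@22-30) is six RPCs against the nvmf_tgt that was started inside the namespace with -m 0x2 (hence the earlier 'Reactor started on core 1'). rpc_cmd is the harness wrapper around scripts/rpc.py, so the equivalent one-shot calls would be:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc_py bdev_malloc_create 32 4096 -b malloc0               # 32 MiB ram-backed bdev, 4 KiB blocks
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose it as NSID 1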
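The --json /dev/fd/62 argument is how the initiator config reaches bdevperf without a temp file: gen_nvmf_target_json renders the bdev_nvme_attach_controller stanza shown above (Nvme1 over tcp to 10.0.0.2:4420, digests off) and the caller, presumably via bash process substitution, hands it over as an inherited file descriptor. A minimal sketch of the same pattern, run from the SPDK tree (the real jq-based assembly in nvmf/common.sh is longer):

# hand the generated JSON to bdevperf as /dev/fd/NN via process substitution
./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192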
00:18:20.657
00:18:20.657                                                                   Latency(us)
00:18:20.657 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average      min       max
00:18:20.657 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:18:20.657 	 Verification LBA range: start 0x0 length 0x1000
00:18:20.657 	 Nvme1n1                  :      10.01  5814.76    45.43     0.00   0.00   21952.09   867.75  34564.17
00:18:20.657 ===================================================================================================================
00:18:20.657 Total                       :             5814.76    45.43     0.00   0.00   21952.09   867.75  34564.17
00:18:20.657 20:06:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3190290
00:18:20.657 20:06:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:18:20.657 20:06:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:20.657 20:06:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:18:20.657 20:06:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:18:20.657 20:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:18:20.657 20:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:18:20.657 20:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:18:20.657 20:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:18:20.657 {
00:18:20.657   "params": {
00:18:20.657     "name": "Nvme$subsystem",
00:18:20.657     "trtype": "$TEST_TRANSPORT",
00:18:20.657     "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:20.657     "adrfam": "ipv4",
00:18:20.657     "trsvcid": "$NVMF_PORT",
00:18:20.657     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:20.657     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:20.657     "hdgst": ${hdgst:-false},
00:18:20.657     "ddgst": ${ddgst:-false}
00:18:20.657   },
00:18:20.657   "method": "bdev_nvme_attach_controller"
00:18:20.657 }
00:18:20.657 EOF
00:18:20.657 )") [2024-07-13 20:06:06.345382] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:20.657 [2024-07-13 20:06:06.345429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:20.657 20:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:18:20.657 20:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:18:20.657 20:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:20.657 20:06:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:20.657 "params": { 00:18:20.657 "name": "Nvme1", 00:18:20.657 "trtype": "tcp", 00:18:20.657 "traddr": "10.0.0.2", 00:18:20.657 "adrfam": "ipv4", 00:18:20.657 "trsvcid": "4420", 00:18:20.657 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.657 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:20.657 "hdgst": false, 00:18:20.657 "ddgst": false 00:18:20.657 }, 00:18:20.657 "method": "bdev_nvme_attach_controller" 00:18:20.657 }' 00:18:20.657 [2024-07-13 20:06:06.353347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.353376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.361366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.361393] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.369388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.369414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.377412] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.377437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.385433] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.385458] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.386296] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:18:20.657 [2024-07-13 20:06:06.386382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3190290 ] 00:18:20.657 [2024-07-13 20:06:06.393457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.393483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.401479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.401503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.409502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.409527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 EAL: No free 2048 kB hugepages reported on node 1 00:18:20.657 [2024-07-13 20:06:06.417527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.417552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.425548] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.425573] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.433569] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.433594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.441591] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.441616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.449613] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.449638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.452090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.657 [2024-07-13 20:06:06.457655] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.457685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.465695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.465731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.473682] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.473709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.481703] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.481729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.489724] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.489750] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.497749] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.497776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.505782] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.505814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.513826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.513863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.521816] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.521843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.529836] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.529862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.537859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.537894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.545890] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.545928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.551978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.657 [2024-07-13 20:06:06.553909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.553947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.561941] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.561963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.569989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.570023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.578007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.578041] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.586023] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.586059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.594047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.594081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.602065] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.602101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.610090] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.610126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.618098] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.618129] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.626117] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.626166] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.657 [2024-07-13 20:06:06.634161] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.657 [2024-07-13 20:06:06.634194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.642195] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.642247] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.650163] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.650184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.658201] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.658226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.666220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.666241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.674268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.674299] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.682287] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.682315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.690308] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.690342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.698336] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.698365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.706360] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.706388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.714376] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.714403] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.722400] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:18:20.658 [2024-07-13 20:06:06.722426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.730431] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.730456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.738452] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.738479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.746475] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.746503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.754500] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.754528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.762520] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.762556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.770557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.770583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 [2024-07-13 20:06:06.778577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:20.658 [2024-07-13 20:06:06.778615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.658 Running I/O for 5 seconds... 
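The error pairs that dominate the rest of this run are expected noise: while the 5-second 50/50 randrw job (perfpid=3190290 in the trace) holds up to 128 I/Os in flight, the test keeps driving namespace-management RPCs at the subsystem, and every attempt to re-add an NSID that is still attached is rejected by spdk_nvmf_subsystem_add_ns_ext. A hypothetical reduction of that pattern, not the actual loop in target/zcopy.sh (which is not part of this trace):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
while kill -0 "$perfpid" 2>/dev/null; do      # as long as bdevperf is still running
	$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true   # fails: NSID 1 in use
done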
00:18:20.658 [2024-07-13 20:06:06.786589] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:20.658 [2024-07-13 20:06:06.786617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats roughly every 10-14 ms from 20:06:06.800719 through 20:06:10.003100, with the console timestamp advancing from 00:18:20.658 to 00:18:22.467; several hundred further pairs elided ...]
00:18:22.467 [2024-07-13 20:06:10.016164]
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.467 [2024-07-13 20:06:10.016196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.467 [2024-07-13 20:06:10.026108] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.467 [2024-07-13 20:06:10.026149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.467 [2024-07-13 20:06:10.036689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.467 [2024-07-13 20:06:10.036718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.467 [2024-07-13 20:06:10.049355] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.467 [2024-07-13 20:06:10.049397] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.467 [2024-07-13 20:06:10.059104] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.467 [2024-07-13 20:06:10.059140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.467 [2024-07-13 20:06:10.070278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.467 [2024-07-13 20:06:10.070306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.467 [2024-07-13 20:06:10.082399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.467 [2024-07-13 20:06:10.082427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.467 [2024-07-13 20:06:10.091481] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.467 [2024-07-13 20:06:10.091508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.467 [2024-07-13 20:06:10.103723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.467 [2024-07-13 20:06:10.103750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.467 [2024-07-13 20:06:10.114678] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.467 [2024-07-13 20:06:10.114706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.125655] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.125682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.136066] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.136094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.147124] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.147152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.158103] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.158131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.169149] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.169177] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.180159] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.180187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.190876] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.190903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.201528] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.201555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.212481] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.212509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.223568] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.223596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.234496] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.234523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.245203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.245230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.256140] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.256167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.267037] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.267064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.277926] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.277954] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.288781] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.288809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.301388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.301415] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.310951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.310978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.322300] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.322328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.333130] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.725 [2024-07-13 20:06:10.333157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.725 [2024-07-13 20:06:10.343972] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.726 [2024-07-13 20:06:10.344000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.726 [2024-07-13 20:06:10.354657] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.726 [2024-07-13 20:06:10.354684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.726 [2024-07-13 20:06:10.365494] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.726 [2024-07-13 20:06:10.365537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.726 [2024-07-13 20:06:10.378029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.726 [2024-07-13 20:06:10.378056] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.388144] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.388171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.399443] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.399470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.410101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.410128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.421296] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.421323] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.432365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.432392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.443208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.443235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.454178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.454205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.465393] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.465423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.476914] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.476941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.487883] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.487911] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.501170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.501197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.511556] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.511583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.522482] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.522510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.533316] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.533343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.544631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.544658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.556007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.556034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.566763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.566805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.579382] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.579410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.589437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.589465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.600675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.600702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.612154] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.612182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.622937] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.622965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:22.984 [2024-07-13 20:06:10.633667] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:22.984 [2024-07-13 20:06:10.633695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.242 [2024-07-13 20:06:10.644723] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.242 [2024-07-13 20:06:10.644750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.242 [2024-07-13 20:06:10.655929] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.242 [2024-07-13 20:06:10.655956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.242 [2024-07-13 20:06:10.666772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.242 [2024-07-13 20:06:10.666800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.242 [2024-07-13 20:06:10.677613] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.242 [2024-07-13 20:06:10.677640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.242 [2024-07-13 20:06:10.688643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.242 [2024-07-13 20:06:10.688670] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.242 [2024-07-13 20:06:10.701776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.242 [2024-07-13 20:06:10.701803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.242 [2024-07-13 20:06:10.711600] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.242 [2024-07-13 20:06:10.711627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.723068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.723096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.733929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.733957] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.745059] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.745086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.755934] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.755969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.767261] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.767289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.778237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.778266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.789126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.789154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.800085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.800113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.811157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.811184] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.824459] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.824486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.834375] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.834402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.845265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.845292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.858064] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.858091] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.867480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.867508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.879383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.879411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.243 [2024-07-13 20:06:10.890075] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.243 [2024-07-13 20:06:10.890103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.500 [2024-07-13 20:06:10.900798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.500 [2024-07-13 20:06:10.900843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.500 [2024-07-13 20:06:10.911718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.500 [2024-07-13 20:06:10.911745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.500 [2024-07-13 20:06:10.922661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.500 [2024-07-13 20:06:10.922688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.500 [2024-07-13 20:06:10.933973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.500 [2024-07-13 20:06:10.934000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.500 [2024-07-13 20:06:10.944960] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.500 [2024-07-13 20:06:10.944988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.500 [2024-07-13 20:06:10.955927] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.500 [2024-07-13 20:06:10.955964] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.500 [2024-07-13 20:06:10.967004] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.500 [2024-07-13 20:06:10.967039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.500 [2024-07-13 20:06:10.977674] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.500 [2024-07-13 20:06:10.977702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.500 [2024-07-13 20:06:10.988338] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.500 [2024-07-13 20:06:10.988365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.500 [2024-07-13 20:06:11.001099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.500 [2024-07-13 20:06:11.001126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.500 [2024-07-13 20:06:11.010424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.500 [2024-07-13 20:06:11.010452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-13 20:06:11.022003] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-13 20:06:11.022030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-13 20:06:11.032746] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-13 20:06:11.032774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-13 20:06:11.043209] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-13 20:06:11.043237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-13 20:06:11.054233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-13 20:06:11.054262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-13 20:06:11.065068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-13 20:06:11.065096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-13 20:06:11.076150] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-13 20:06:11.076178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-13 20:06:11.087065] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-13 20:06:11.087093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-13 20:06:11.100263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-13 20:06:11.100308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-13 20:06:11.110134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-13 20:06:11.110162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-13 20:06:11.122053] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-13 20:06:11.122082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-13 20:06:11.133073] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-13 20:06:11.133101] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-13 20:06:11.146121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-13 20:06:11.146148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-13 20:06:11.155616] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-13 20:06:11.155644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.758 [2024-07-13 20:06:11.167158] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.758 [2024-07-13 20:06:11.167186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.758 [2024-07-13 20:06:11.179818] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.179852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.197973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.198002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.208595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.208622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.219137] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.219165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.230413] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.230440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.241049] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.241076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.251522] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.251549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.262815] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.262842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.275636] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.275664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.285824] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.285851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.296700] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.296728] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.307671] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.307698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.318744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.318788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.331549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.331576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.341532] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.341560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.352075] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.352103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.362539] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.362567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.373346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.373373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.384051] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.384081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.394696] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.394732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.759 [2024-07-13 20:06:11.405744] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.759 [2024-07-13 20:06:11.405771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.416989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.417016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.427951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.427978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.438639] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.438666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.449551] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.449578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.460145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.460172] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.471358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.471385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.482188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.482216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.493012] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.493039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.503849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.503885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.514661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.514689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.525316] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.525344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.537863] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.537899] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.547945] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.547972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.558879] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.558907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.569653] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.569681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.580779] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.580807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.591769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.591796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.602443] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.602471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.613604] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.613631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.626104] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.626131] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.636408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.636435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.647517] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.647545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.660297] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.660325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.017 [2024-07-13 20:06:11.670652] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.017 [2024-07-13 20:06:11.670680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.276 [2024-07-13 20:06:11.682133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.276 [2024-07-13 20:06:11.682161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.276 [2024-07-13 20:06:11.693100] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.276 [2024-07-13 20:06:11.693128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.276 [2024-07-13 20:06:11.703970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.276 [2024-07-13 20:06:11.703998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.276 [2024-07-13 20:06:11.714953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.276 [2024-07-13 20:06:11.714981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.276 [2024-07-13 20:06:11.727537] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.276 [2024-07-13 20:06:11.727565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.276 [2024-07-13 20:06:11.737488] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.276 [2024-07-13 20:06:11.737516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.276 [2024-07-13 20:06:11.748476] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.276 [2024-07-13 20:06:11.748503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.276 [2024-07-13 20:06:11.761074] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.276 [2024-07-13 20:06:11.761102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.276 [2024-07-13 20:06:11.772342] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.276 [2024-07-13 20:06:11.772370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.276 [2024-07-13 20:06:11.782291] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.276 [2024-07-13 20:06:11.782322] 
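The paired subsystem.c/nvmf_rpc.c errors above are expected noise from zcopy.sh: while the subsystem is paused, the test keeps re-issuing an add-namespace RPC for an NSID that is already attached, so the pair recurs with fresh timestamps roughly every 10 ms from 20:06:09.386 until the storm ends at 20:06:12.041. A minimal sketch of the call sequence that trips this error path, assuming the stock scripts/rpc.py client and the cnode1/malloc0 names used earlier in this job (the malloc size and block size below are illustrative, not taken from this run):

  scripts/rpc.py bdev_malloc_create -b malloc0 64 512                            # backing bdev (assumed 64 MiB, 512 B blocks)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # first attach of NSID 1 succeeds
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # repeat attach fails with
                                                                                 # "Requested NSID 1 already in use"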
00:18:24.276
00:18:24.276 Latency(us)
00:18:24.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:24.276 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:24.276 Nvme1n1 : 5.01 11601.45 90.64 0.00 0.00 11018.67 4708.88 25049.32
00:18:24.276 ===================================================================================================================
00:18:24.276 Total : 11601.45 90.64 0.00 0.00 11018.67 4708.88 25049.32
00:18:24.276 [2024-07-13 20:06:11.808862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:24.276 [2024-07-13 20:06:11.808896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
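The summary above is internally consistent: at the 8192-byte I/O size shown in the Job line, 11601.45 IOPS corresponds to the reported 90.64 MiB/s. A one-line check with plain awk (independent of the test scripts):

  awk 'BEGIN { printf "%.2f MiB/s\n", 11601.45 * 8192 / (1024 * 1024) }'   # prints 90.64 MiB/s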
00:18:24.536 [2024-07-13 20:06:12.041499] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:24.536 [2024-07-13 20:06:12.041524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:24.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3190290) - No such process
00:18:24.536 20:06:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3190290
00:18:24.536 20:06:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:24.536 20:06:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:24.536 20:06:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:24.536 20:06:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:24.536 20:06:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:18:24.536 20:06:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:24.536 20:06:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:24.536 delay0
00:18:24.536 20:06:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:24.536 20:06:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:18:24.536 20:06:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:24.536 20:06:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:24.536 20:06:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:24.536 20:06:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:18:24.536 EAL: No free 2048 kB hugepages reported on node 1
00:18:24.536 [2024-07-13 20:06:12.205063] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:18:31.353 Initializing NVMe Controllers
00:18:31.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:18:31.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:31.353 Initialization complete. Launching workers.
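The traced lines 52-56 of zcopy.sh above swap the malloc-backed namespace for a delay bdev and then run the abort example against it, so queued I/O is slow enough (~1 s per op) to still be in flight when the aborts arrive. Replayed as direct commands, a sketch that assumes rpc_cmd resolves to the stock scripts/rpc.py (every flag below is taken from the trace):

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # detach the old NSID 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                               # wrap malloc0; avg/p99 read+write latency 1,000,000 us
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1   # re-attach the slow bdev as NSID 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'             # keep 64 I/Os queued for 5 s and abort them mid-flight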
00:18:31.353 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 76
00:18:31.353 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 363, failed to submit 33
00:18:31.353 success 148, unsuccess 215, failed 0
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:18:31.353 rmmod nvme_tcp
00:18:31.353 rmmod nvme_fabrics
00:18:31.353 rmmod nvme_keyring
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3188884 ']'
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3188884
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 3188884 ']'
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 3188884
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3188884
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3188884'
00:18:31.353 killing process with pid 3188884
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 3188884
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 3188884
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:31.353 20:06:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:33.256 20:06:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:18:33.256
00:18:33.256 real 0m27.769s
00:18:33.256 user 0m40.816s
00:18:33.256 sys 0m8.463s
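The nvmftestfini trace above is the standard teardown: sync, unload the kernel NVMe initiator modules, kill the SPDK target (pid 3188884, running as reactor_1), and flush the addresses left on the test interface. A condensed sketch of that pattern, hedged because the retry behaviour of the real nvmf/common.sh loop is only partly visible in this trace ($nvmfpid stands in for the recorded pid):

  sync                                   # flush page cache before touching modules
  for i in {1..20}; do                   # same retry bound as the traced loop
      modprobe -v -r nvme-tcp && break   # also pulls nvme_tcp/nvme_fabrics/nvme_keyring, per the rmmod lines above
  done
  modprobe -v -r nvme-fabrics            # second pass, as in the trace
  kill "$nvmfpid" && wait "$nvmfpid"     # stop the target process (3188884 in this run)
  ip -4 addr flush cvl_0_1               # drop the test addresses from the NIC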
common/autotest_common.sh@1122 -- # xtrace_disable 00:18:33.256 20:06:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:33.256 ************************************ 00:18:33.256 END TEST nvmf_zcopy 00:18:33.256 ************************************ 00:18:33.256 20:06:20 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:33.256 20:06:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:33.256 20:06:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:33.256 20:06:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:33.256 ************************************ 00:18:33.256 START TEST nvmf_nmic 00:18:33.256 ************************************ 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:33.256 * Looking for test storage... 00:18:33.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:33.256 20:06:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:33.257 20:06:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:33.257 20:06:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:33.257 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:33.257 20:06:20 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.257 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:33.257 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:33.257 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:33.257 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.257 20:06:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.257 20:06:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.257 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:33.257 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:33.257 20:06:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:33.257 20:06:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:35.159 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.159 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:35.160 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:35.160 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:35.160 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:35.160 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:35.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:35.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:18:35.417 00:18:35.417 --- 10.0.0.2 ping statistics --- 00:18:35.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.417 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:35.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:35.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:18:35.417 00:18:35.417 --- 10.0.0.1 ping statistics --- 00:18:35.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.417 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3194169 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3194169 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 3194169 ']' 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:35.417 20:06:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:35.417 [2024-07-13 20:06:23.011755] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:18:35.417 [2024-07-13 20:06:23.011840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.417 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.675 [2024-07-13 20:06:23.079576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:35.675 [2024-07-13 20:06:23.166733] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.675 [2024-07-13 20:06:23.166785] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:35.675 [2024-07-13 20:06:23.166809] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.675 [2024-07-13 20:06:23.166819] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.675 [2024-07-13 20:06:23.166829] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:35.675 [2024-07-13 20:06:23.166907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.675 [2024-07-13 20:06:23.166965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.675 [2024-07-13 20:06:23.166935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.675 [2024-07-13 20:06:23.166962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:35.675 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:35.675 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:18:35.675 20:06:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.675 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.675 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:35.675 20:06:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.675 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:35.675 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.675 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:35.675 [2024-07-13 20:06:23.306427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.675 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.675 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:35.675 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.675 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:35.932 Malloc0 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:35.932 [2024-07-13 20:06:23.357952] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:35.932 test case1: single bdev can't be used in multiple subsystems 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:35.932 [2024-07-13 20:06:23.381809] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:35.932 [2024-07-13 20:06:23.381837] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:35.932 [2024-07-13 20:06:23.381881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.932 request: 00:18:35.932 { 00:18:35.932 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:35.932 "namespace": { 00:18:35.932 "bdev_name": "Malloc0", 00:18:35.932 "no_auto_visible": false 00:18:35.932 }, 00:18:35.932 "method": "nvmf_subsystem_add_ns", 00:18:35.932 "req_id": 1 00:18:35.932 } 00:18:35.932 Got JSON-RPC error response 00:18:35.932 response: 00:18:35.932 { 00:18:35.932 "code": -32602, 00:18:35.932 "message": "Invalid parameters" 00:18:35.932 } 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:35.932 Adding namespace failed - expected result. 
00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:35.932 test case2: host connect to nvmf target in multiple paths 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:35.932 [2024-07-13 20:06:23.389963] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.932 20:06:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:36.496 20:06:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:37.060 20:06:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:37.060 20:06:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:18:37.060 20:06:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:37.060 20:06:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:37.060 20:06:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:18:39.585 20:06:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:39.585 20:06:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:39.585 20:06:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:39.585 20:06:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:39.585 20:06:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:39.585 20:06:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:18:39.585 20:06:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:39.585 [global] 00:18:39.585 thread=1 00:18:39.585 invalidate=1 00:18:39.585 rw=write 00:18:39.585 time_based=1 00:18:39.585 runtime=1 00:18:39.585 ioengine=libaio 00:18:39.585 direct=1 00:18:39.585 bs=4096 00:18:39.585 iodepth=1 00:18:39.585 norandommap=0 00:18:39.585 numjobs=1 00:18:39.585 00:18:39.585 verify_dump=1 00:18:39.585 verify_backlog=512 00:18:39.585 verify_state_save=0 00:18:39.585 do_verify=1 00:18:39.585 verify=crc32c-intel 00:18:39.585 [job0] 00:18:39.585 filename=/dev/nvme0n1 00:18:39.585 Could not set queue depth (nvme0n1) 00:18:39.585 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:39.585 fio-3.35 00:18:39.585 Starting 1 thread 00:18:40.556 00:18:40.556 job0: (groupid=0, jobs=1): err= 0: pid=3194675: Sat Jul 13 20:06:28 2024 00:18:40.556 read: IOPS=1496, BW=5986KiB/s (6130kB/s)(5992KiB/1001msec) 00:18:40.556 slat (nsec): min=5299, max=54012, avg=19615.41, stdev=10559.53 
00:18:40.556 clat (usec): min=275, max=557, avg=354.45, stdev=60.26 00:18:40.556 lat (usec): min=282, max=589, avg=374.07, stdev=68.42 00:18:40.556 clat percentiles (usec): 00:18:40.556 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:18:40.556 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 347], 00:18:40.556 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 461], 95.00th=[ 478], 00:18:40.556 | 99.00th=[ 545], 99.50th=[ 553], 99.90th=[ 553], 99.95th=[ 562], 00:18:40.556 | 99.99th=[ 562] 00:18:40.556 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:40.556 slat (usec): min=7, max=40618, avg=60.83, stdev=1261.07 00:18:40.556 clat (usec): min=174, max=461, avg=214.74, stdev=35.50 00:18:40.556 lat (usec): min=182, max=40983, avg=275.57, stdev=1266.70 00:18:40.556 clat percentiles (usec): 00:18:40.556 | 1.00th=[ 180], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:18:40.556 | 30.00th=[ 196], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:18:40.556 | 70.00th=[ 217], 80.00th=[ 239], 90.00th=[ 260], 95.00th=[ 289], 00:18:40.556 | 99.00th=[ 338], 99.50th=[ 400], 99.90th=[ 429], 99.95th=[ 461], 00:18:40.556 | 99.99th=[ 461] 00:18:40.556 bw ( KiB/s): min= 7016, max= 7016, per=100.00%, avg=7016.00, stdev= 0.00, samples=1 00:18:40.556 iops : min= 1754, max= 1754, avg=1754.00, stdev= 0.00, samples=1 00:18:40.556 lat (usec) : 250=43.74%, 500=55.11%, 750=1.15% 00:18:40.556 cpu : usr=3.00%, sys=5.60%, ctx=3039, majf=0, minf=2 00:18:40.556 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:40.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.556 issued rwts: total=1498,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.556 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:40.556 00:18:40.556 Run status group 0 (all jobs): 00:18:40.556 READ: bw=5986KiB/s (6130kB/s), 5986KiB/s-5986KiB/s (6130kB/s-6130kB/s), io=5992KiB (6136kB), run=1001-1001msec 00:18:40.556 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:18:40.556 00:18:40.556 Disk stats (read/write): 00:18:40.556 nvme0n1: ios=1219/1536, merge=0/0, ticks=1393/307, in_queue=1700, util=99.70% 00:18:40.556 20:06:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:40.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:40.556 20:06:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:40.556 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:18:40.556 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:40.556 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:40.814 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:40.814 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:40.814 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:18:40.814 20:06:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:40.814 20:06:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:40.814 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:40.814 20:06:28 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@117 -- # sync 00:18:40.814 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:40.814 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:40.815 rmmod nvme_tcp 00:18:40.815 rmmod nvme_fabrics 00:18:40.815 rmmod nvme_keyring 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3194169 ']' 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3194169 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 3194169 ']' 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 3194169 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3194169 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3194169' 00:18:40.815 killing process with pid 3194169 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 3194169 00:18:40.815 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 3194169 00:18:41.075 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:41.075 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:41.075 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:41.075 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:41.075 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:41.075 20:06:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.075 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:41.075 20:06:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.976 20:06:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:42.976 00:18:42.976 real 0m9.835s 00:18:42.976 user 0m22.367s 00:18:42.976 sys 0m2.364s 00:18:42.976 20:06:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:42.976 20:06:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:42.976 ************************************ 00:18:42.976 END TEST nvmf_nmic 00:18:42.976 ************************************ 00:18:42.976 20:06:30 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:42.976 20:06:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:42.976 20:06:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:42.976 
20:06:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:43.234 ************************************ 00:18:43.234 START TEST nvmf_fio_target 00:18:43.234 ************************************ 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:43.234 * Looking for test storage... 00:18:43.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:43.234 20:06:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:45.182 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:45.182 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.182 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:45.183 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:45.183 20:06:32 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:45.183 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:45.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:45.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:18:45.183 00:18:45.183 --- 10.0.0.2 ping statistics --- 00:18:45.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.183 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:18:45.183 00:18:45.183 --- 10.0.0.1 ping statistics --- 00:18:45.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.183 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3196869 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3196869 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 3196869 ']' 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:45.183 20:06:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.441 [2024-07-13 20:06:32.864978] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:18:45.441 [2024-07-13 20:06:32.865049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.441 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.441 [2024-07-13 20:06:32.928527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:45.441 [2024-07-13 20:06:33.013428] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.441 [2024-07-13 20:06:33.013476] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.441 [2024-07-13 20:06:33.013499] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.441 [2024-07-13 20:06:33.013510] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.441 [2024-07-13 20:06:33.013519] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.441 [2024-07-13 20:06:33.013597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.441 [2024-07-13 20:06:33.013660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.441 [2024-07-13 20:06:33.013726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.441 [2024-07-13 20:06:33.013728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.698 20:06:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:45.698 20:06:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:18:45.698 20:06:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.698 20:06:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.698 20:06:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.698 20:06:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.698 20:06:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:45.955 [2024-07-13 20:06:33.437665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.955 20:06:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:46.212 20:06:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:46.212 20:06:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:46.469 20:06:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:46.469 20:06:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:46.726 20:06:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:46.726 20:06:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:46.983 20:06:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:46.983 20:06:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:47.240 20:06:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:47.497 20:06:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:47.497 20:06:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:47.755 20:06:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:47.755 20:06:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:48.012 20:06:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:48.012 20:06:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:48.269 20:06:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:48.527 20:06:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:48.527 20:06:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:48.784 20:06:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:48.785 20:06:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:49.042 20:06:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:49.300 [2024-07-13 20:06:36.830702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.300 20:06:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:49.559 20:06:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:49.816 20:06:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:50.382 20:06:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:50.382 20:06:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:18:50.382 20:06:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:50.382 20:06:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:18:50.382 20:06:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:18:50.382 20:06:38 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1201 -- # sleep 2 00:18:52.908 20:06:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:52.908 20:06:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:52.908 20:06:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:52.908 20:06:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:18:52.908 20:06:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:52.908 20:06:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:18:52.908 20:06:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:52.908 [global] 00:18:52.908 thread=1 00:18:52.908 invalidate=1 00:18:52.908 rw=write 00:18:52.908 time_based=1 00:18:52.908 runtime=1 00:18:52.908 ioengine=libaio 00:18:52.908 direct=1 00:18:52.908 bs=4096 00:18:52.908 iodepth=1 00:18:52.908 norandommap=0 00:18:52.908 numjobs=1 00:18:52.908 00:18:52.908 verify_dump=1 00:18:52.908 verify_backlog=512 00:18:52.908 verify_state_save=0 00:18:52.908 do_verify=1 00:18:52.908 verify=crc32c-intel 00:18:52.908 [job0] 00:18:52.908 filename=/dev/nvme0n1 00:18:52.908 [job1] 00:18:52.908 filename=/dev/nvme0n2 00:18:52.908 [job2] 00:18:52.908 filename=/dev/nvme0n3 00:18:52.908 [job3] 00:18:52.909 filename=/dev/nvme0n4 00:18:52.909 Could not set queue depth (nvme0n1) 00:18:52.909 Could not set queue depth (nvme0n2) 00:18:52.909 Could not set queue depth (nvme0n3) 00:18:52.909 Could not set queue depth (nvme0n4) 00:18:52.909 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.909 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.909 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.909 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.909 fio-3.35 00:18:52.909 Starting 4 threads 00:18:53.842 00:18:53.842 job0: (groupid=0, jobs=1): err= 0: pid=3197824: Sat Jul 13 20:06:41 2024 00:18:53.842 read: IOPS=20, BW=81.0KiB/s (82.9kB/s)(84.0KiB/1037msec) 00:18:53.842 slat (nsec): min=13304, max=35968, avg=18924.14, stdev=7397.59 00:18:53.842 clat (usec): min=40706, max=42242, avg=41714.97, stdev=471.49 00:18:53.842 lat (usec): min=40719, max=42259, avg=41733.89, stdev=473.46 00:18:53.842 clat percentiles (usec): 00:18:53.842 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:53.842 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:18:53.842 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:53.842 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:53.842 | 99.99th=[42206] 00:18:53.842 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:18:53.842 slat (nsec): min=9690, max=72851, avg=20831.63, stdev=9630.32 00:18:53.842 clat (usec): min=205, max=457, avg=287.26, stdev=48.59 00:18:53.842 lat (usec): min=217, max=498, avg=308.09, stdev=51.64 00:18:53.842 clat percentiles (usec): 00:18:53.842 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 249], 00:18:53.842 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 289], 
00:18:53.842 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 355], 95.00th=[ 400], 00:18:53.842 | 99.00th=[ 441], 99.50th=[ 453], 99.90th=[ 457], 99.95th=[ 457], 00:18:53.842 | 99.99th=[ 457] 00:18:53.842 bw ( KiB/s): min= 4096, max= 4096, per=29.63%, avg=4096.00, stdev= 0.00, samples=1 00:18:53.842 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:53.842 lat (usec) : 250=21.01%, 500=75.05% 00:18:53.842 lat (msec) : 50=3.94% 00:18:53.842 cpu : usr=0.29%, sys=1.74%, ctx=535, majf=0, minf=2 00:18:53.842 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.842 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.842 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.842 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.842 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.842 job1: (groupid=0, jobs=1): err= 0: pid=3197825: Sat Jul 13 20:06:41 2024 00:18:53.842 read: IOPS=505, BW=2021KiB/s (2070kB/s)(2092KiB/1035msec) 00:18:53.842 slat (nsec): min=6075, max=69502, avg=17497.69, stdev=7182.55 00:18:53.842 clat (usec): min=355, max=41448, avg=1259.77, stdev=5554.87 00:18:53.842 lat (usec): min=361, max=41455, avg=1277.27, stdev=5554.94 00:18:53.842 clat percentiles (usec): 00:18:53.842 | 1.00th=[ 371], 5.00th=[ 396], 10.00th=[ 404], 20.00th=[ 424], 00:18:53.842 | 30.00th=[ 437], 40.00th=[ 457], 50.00th=[ 482], 60.00th=[ 498], 00:18:53.842 | 70.00th=[ 529], 80.00th=[ 553], 90.00th=[ 594], 95.00th=[ 619], 00:18:53.842 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:18:53.842 | 99.99th=[41681] 00:18:53.842 write: IOPS=989, BW=3957KiB/s (4052kB/s)(4096KiB/1035msec); 0 zone resets 00:18:53.842 slat (nsec): min=5629, max=77322, avg=15566.53, stdev=9508.85 00:18:53.842 clat (usec): min=187, max=584, avg=335.90, stdev=90.22 00:18:53.842 lat (usec): min=195, max=621, avg=351.47, stdev=94.69 00:18:53.842 clat percentiles (usec): 00:18:53.842 | 1.00th=[ 194], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 237], 00:18:53.842 | 30.00th=[ 281], 40.00th=[ 310], 50.00th=[ 334], 60.00th=[ 367], 00:18:53.842 | 70.00th=[ 392], 80.00th=[ 404], 90.00th=[ 449], 95.00th=[ 510], 00:18:53.842 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 586], 99.95th=[ 586], 00:18:53.842 | 99.99th=[ 586] 00:18:53.842 bw ( KiB/s): min= 2656, max= 5536, per=29.63%, avg=4096.00, stdev=2036.47, samples=2 00:18:53.842 iops : min= 664, max= 1384, avg=1024.00, stdev=509.12, samples=2 00:18:53.842 lat (usec) : 250=16.42%, 500=67.03%, 750=15.90% 00:18:53.843 lat (msec) : 50=0.65% 00:18:53.843 cpu : usr=1.55%, sys=2.32%, ctx=1547, majf=0, minf=1 00:18:53.843 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.843 issued rwts: total=523,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.843 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.843 job2: (groupid=0, jobs=1): err= 0: pid=3197826: Sat Jul 13 20:06:41 2024 00:18:53.843 read: IOPS=1381, BW=5526KiB/s (5659kB/s)(5532KiB/1001msec) 00:18:53.843 slat (nsec): min=4469, max=65615, avg=13922.16, stdev=8086.17 00:18:53.843 clat (usec): min=276, max=849, avg=389.91, stdev=95.88 00:18:53.843 lat (usec): min=282, max=863, avg=403.83, stdev=97.60 00:18:53.843 clat percentiles (usec): 00:18:53.843 | 1.00th=[ 285], 5.00th=[ 293], 
10.00th=[ 302], 20.00th=[ 318], 00:18:53.843 | 30.00th=[ 334], 40.00th=[ 363], 50.00th=[ 375], 60.00th=[ 379], 00:18:53.843 | 70.00th=[ 396], 80.00th=[ 433], 90.00th=[ 478], 95.00th=[ 635], 00:18:53.843 | 99.00th=[ 750], 99.50th=[ 775], 99.90th=[ 840], 99.95th=[ 848], 00:18:53.843 | 99.99th=[ 848] 00:18:53.843 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:53.843 slat (nsec): min=6047, max=74749, avg=13543.91, stdev=10063.79 00:18:53.843 clat (usec): min=181, max=588, avg=266.73, stdev=74.03 00:18:53.843 lat (usec): min=188, max=604, avg=280.27, stdev=78.49 00:18:53.843 clat percentiles (usec): 00:18:53.843 | 1.00th=[ 186], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:18:53.843 | 30.00th=[ 204], 40.00th=[ 221], 50.00th=[ 241], 60.00th=[ 277], 00:18:53.843 | 70.00th=[ 322], 80.00th=[ 338], 90.00th=[ 375], 95.00th=[ 392], 00:18:53.843 | 99.00th=[ 453], 99.50th=[ 478], 99.90th=[ 578], 99.95th=[ 586], 00:18:53.843 | 99.99th=[ 586] 00:18:53.843 bw ( KiB/s): min= 7216, max= 7216, per=52.20%, avg=7216.00, stdev= 0.00, samples=1 00:18:53.843 iops : min= 1804, max= 1804, avg=1804.00, stdev= 0.00, samples=1 00:18:53.843 lat (usec) : 250=28.13%, 500=67.25%, 750=4.11%, 1000=0.51% 00:18:53.843 cpu : usr=2.10%, sys=4.40%, ctx=2920, majf=0, minf=1 00:18:53.843 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:53.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.843 issued rwts: total=1383,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.843 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.843 job3: (groupid=0, jobs=1): err= 0: pid=3197827: Sat Jul 13 20:06:41 2024 00:18:53.843 read: IOPS=20, BW=81.1KiB/s (83.0kB/s)(84.0KiB/1036msec) 00:18:53.843 slat (nsec): min=10651, max=34582, avg=15288.38, stdev=4931.02 00:18:53.843 clat (usec): min=40794, max=42956, avg=41735.85, stdev=555.24 00:18:53.843 lat (usec): min=40805, max=42976, avg=41751.14, stdev=555.76 00:18:53.843 clat percentiles (usec): 00:18:53.843 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:53.843 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:18:53.843 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:53.843 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:53.843 | 99.99th=[42730] 00:18:53.843 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:18:53.843 slat (nsec): min=8992, max=66904, avg=18740.84, stdev=8963.13 00:18:53.843 clat (usec): min=200, max=478, avg=285.55, stdev=45.80 00:18:53.843 lat (usec): min=212, max=500, avg=304.29, stdev=49.60 00:18:53.843 clat percentiles (usec): 00:18:53.843 | 1.00th=[ 217], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 247], 00:18:53.843 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 289], 00:18:53.843 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 347], 95.00th=[ 375], 00:18:53.843 | 99.00th=[ 449], 99.50th=[ 453], 99.90th=[ 478], 99.95th=[ 478], 00:18:53.843 | 99.99th=[ 478] 00:18:53.843 bw ( KiB/s): min= 4096, max= 4096, per=29.63%, avg=4096.00, stdev= 0.00, samples=1 00:18:53.843 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:53.843 lat (usec) : 250=21.39%, 500=74.67% 00:18:53.843 lat (msec) : 50=3.94% 00:18:53.843 cpu : usr=0.87%, sys=0.97%, ctx=534, majf=0, minf=1 00:18:53.843 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:18:53.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.843 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.843 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:53.843 00:18:53.843 Run status group 0 (all jobs): 00:18:53.843 READ: bw=7514KiB/s (7694kB/s), 81.0KiB/s-5526KiB/s (82.9kB/s-5659kB/s), io=7792KiB (7979kB), run=1001-1037msec 00:18:53.843 WRITE: bw=13.5MiB/s (14.2MB/s), 1975KiB/s-6138KiB/s (2022kB/s-6285kB/s), io=14.0MiB (14.7MB), run=1001-1037msec 00:18:53.843 00:18:53.843 Disk stats (read/write): 00:18:53.843 nvme0n1: ios=68/512, merge=0/0, ticks=1070/144, in_queue=1214, util=97.39% 00:18:53.843 nvme0n2: ios=550/1024, merge=0/0, ticks=599/324, in_queue=923, util=95.31% 00:18:53.843 nvme0n3: ios=1049/1424, merge=0/0, ticks=1402/354, in_queue=1756, util=97.69% 00:18:53.843 nvme0n4: ios=16/512, merge=0/0, ticks=667/136, in_queue=803, util=89.62% 00:18:53.843 20:06:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:54.101 [global] 00:18:54.101 thread=1 00:18:54.101 invalidate=1 00:18:54.101 rw=randwrite 00:18:54.101 time_based=1 00:18:54.101 runtime=1 00:18:54.101 ioengine=libaio 00:18:54.101 direct=1 00:18:54.101 bs=4096 00:18:54.101 iodepth=1 00:18:54.101 norandommap=0 00:18:54.101 numjobs=1 00:18:54.101 00:18:54.101 verify_dump=1 00:18:54.101 verify_backlog=512 00:18:54.101 verify_state_save=0 00:18:54.101 do_verify=1 00:18:54.101 verify=crc32c-intel 00:18:54.101 [job0] 00:18:54.101 filename=/dev/nvme0n1 00:18:54.101 [job1] 00:18:54.101 filename=/dev/nvme0n2 00:18:54.101 [job2] 00:18:54.101 filename=/dev/nvme0n3 00:18:54.101 [job3] 00:18:54.101 filename=/dev/nvme0n4 00:18:54.101 Could not set queue depth (nvme0n1) 00:18:54.101 Could not set queue depth (nvme0n2) 00:18:54.101 Could not set queue depth (nvme0n3) 00:18:54.101 Could not set queue depth (nvme0n4) 00:18:54.101 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:54.101 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:54.101 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:54.101 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:54.101 fio-3.35 00:18:54.101 Starting 4 threads 00:18:55.474 00:18:55.474 job0: (groupid=0, jobs=1): err= 0: pid=3198116: Sat Jul 13 20:06:42 2024 00:18:55.474 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:55.474 slat (nsec): min=6769, max=50661, avg=16264.43, stdev=7434.65 00:18:55.474 clat (usec): min=429, max=888, avg=557.55, stdev=52.72 00:18:55.474 lat (usec): min=444, max=907, avg=573.82, stdev=54.06 00:18:55.474 clat percentiles (usec): 00:18:55.474 | 1.00th=[ 461], 5.00th=[ 474], 10.00th=[ 486], 20.00th=[ 515], 00:18:55.474 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 570], 00:18:55.474 | 70.00th=[ 578], 80.00th=[ 594], 90.00th=[ 619], 95.00th=[ 635], 00:18:55.474 | 99.00th=[ 725], 99.50th=[ 750], 99.90th=[ 873], 99.95th=[ 889], 00:18:55.474 | 99.99th=[ 889] 00:18:55.474 write: IOPS=1100, BW=4404KiB/s (4509kB/s)(4408KiB/1001msec); 0 zone resets 00:18:55.474 slat (nsec): min=7396, max=75511, avg=19260.08, 
stdev=10187.22 00:18:55.474 clat (usec): min=188, max=2003, avg=342.33, stdev=120.32 00:18:55.474 lat (usec): min=199, max=2020, avg=361.59, stdev=122.72 00:18:55.474 clat percentiles (usec): 00:18:55.474 | 1.00th=[ 198], 5.00th=[ 215], 10.00th=[ 235], 20.00th=[ 262], 00:18:55.474 | 30.00th=[ 285], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 343], 00:18:55.474 | 70.00th=[ 379], 80.00th=[ 416], 90.00th=[ 453], 95.00th=[ 486], 00:18:55.474 | 99.00th=[ 840], 99.50th=[ 1004], 99.90th=[ 1254], 99.95th=[ 2008], 00:18:55.474 | 99.99th=[ 2008] 00:18:55.474 bw ( KiB/s): min= 4096, max= 4096, per=32.70%, avg=4096.00, stdev= 0.00, samples=1 00:18:55.474 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:55.474 lat (usec) : 250=7.90%, 500=48.31%, 750=42.85%, 1000=0.61% 00:18:55.474 lat (msec) : 2=0.28%, 4=0.05% 00:18:55.474 cpu : usr=3.10%, sys=4.60%, ctx=2127, majf=0, minf=1 00:18:55.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:55.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.474 issued rwts: total=1024,1102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:55.474 job1: (groupid=0, jobs=1): err= 0: pid=3198131: Sat Jul 13 20:06:42 2024 00:18:55.474 read: IOPS=23, BW=92.3KiB/s (94.5kB/s)(96.0KiB/1040msec) 00:18:55.474 slat (nsec): min=8114, max=33592, avg=18032.17, stdev=6251.97 00:18:55.474 clat (usec): min=446, max=42019, avg=37772.40, stdev=11489.79 00:18:55.474 lat (usec): min=462, max=42035, avg=37790.43, stdev=11489.43 00:18:55.474 clat percentiles (usec): 00:18:55.474 | 1.00th=[ 445], 5.00th=[ 537], 10.00th=[40633], 20.00th=[41157], 00:18:55.474 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:55.474 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:18:55.474 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:55.474 | 99.99th=[42206] 00:18:55.474 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:18:55.474 slat (nsec): min=6201, max=73277, avg=12123.40, stdev=7681.67 00:18:55.474 clat (usec): min=178, max=511, avg=244.02, stdev=52.96 00:18:55.474 lat (usec): min=187, max=529, avg=256.14, stdev=56.43 00:18:55.474 clat percentiles (usec): 00:18:55.474 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 210], 00:18:55.474 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 237], 00:18:55.474 | 70.00th=[ 247], 80.00th=[ 260], 90.00th=[ 314], 95.00th=[ 371], 00:18:55.474 | 99.00th=[ 424], 99.50th=[ 494], 99.90th=[ 510], 99.95th=[ 510], 00:18:55.474 | 99.99th=[ 510] 00:18:55.474 bw ( KiB/s): min= 4096, max= 4096, per=32.70%, avg=4096.00, stdev= 0.00, samples=1 00:18:55.474 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:55.474 lat (usec) : 250=71.83%, 500=23.69%, 750=0.37% 00:18:55.474 lat (msec) : 50=4.10% 00:18:55.474 cpu : usr=0.38%, sys=0.58%, ctx=538, majf=0, minf=1 00:18:55.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:55.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.474 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:55.474 job2: (groupid=0, jobs=1): err= 0: pid=3198170: 
Sat Jul 13 20:06:42 2024 00:18:55.474 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:18:55.474 slat (nsec): min=15755, max=36885, avg=19455.32, stdev=6621.81 00:18:55.474 clat (usec): min=456, max=41372, avg=39126.21, stdev=8637.97 00:18:55.474 lat (usec): min=475, max=41388, avg=39145.67, stdev=8638.11 00:18:55.474 clat percentiles (usec): 00:18:55.474 | 1.00th=[ 457], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:55.474 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:55.474 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:55.474 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:55.474 | 99.99th=[41157] 00:18:55.474 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:18:55.474 slat (nsec): min=6531, max=42255, avg=13261.48, stdev=6897.97 00:18:55.474 clat (usec): min=198, max=1221, avg=262.68, stdev=68.19 00:18:55.474 lat (usec): min=206, max=1229, avg=275.94, stdev=68.78 00:18:55.474 clat percentiles (usec): 00:18:55.474 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 231], 00:18:55.474 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 255], 00:18:55.474 | 70.00th=[ 265], 80.00th=[ 281], 90.00th=[ 310], 95.00th=[ 347], 00:18:55.474 | 99.00th=[ 457], 99.50th=[ 725], 99.90th=[ 1221], 99.95th=[ 1221], 00:18:55.474 | 99.99th=[ 1221] 00:18:55.474 bw ( KiB/s): min= 4096, max= 4096, per=32.70%, avg=4096.00, stdev= 0.00, samples=1 00:18:55.474 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:55.474 lat (usec) : 250=50.00%, 500=45.32%, 750=0.37%, 1000=0.19% 00:18:55.474 lat (msec) : 2=0.19%, 50=3.93% 00:18:55.474 cpu : usr=0.60%, sys=0.40%, ctx=536, majf=0, minf=1 00:18:55.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:55.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.474 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:55.474 job3: (groupid=0, jobs=1): err= 0: pid=3198172: Sat Jul 13 20:06:42 2024 00:18:55.474 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:55.474 slat (nsec): min=6431, max=79094, avg=22518.48, stdev=10349.79 00:18:55.474 clat (usec): min=426, max=837, avg=556.01, stdev=55.30 00:18:55.474 lat (usec): min=441, max=853, avg=578.53, stdev=57.13 00:18:55.474 clat percentiles (usec): 00:18:55.474 | 1.00th=[ 453], 5.00th=[ 478], 10.00th=[ 490], 20.00th=[ 515], 00:18:55.474 | 30.00th=[ 529], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 562], 00:18:55.474 | 70.00th=[ 578], 80.00th=[ 586], 90.00th=[ 611], 95.00th=[ 644], 00:18:55.474 | 99.00th=[ 766], 99.50th=[ 816], 99.90th=[ 840], 99.95th=[ 840], 00:18:55.474 | 99.99th=[ 840] 00:18:55.474 write: IOPS=1129, BW=4519KiB/s (4628kB/s)(4524KiB/1001msec); 0 zone resets 00:18:55.474 slat (nsec): min=6318, max=74008, avg=17046.79, stdev=9683.43 00:18:55.474 clat (usec): min=204, max=542, avg=328.66, stdev=72.87 00:18:55.474 lat (usec): min=215, max=566, avg=345.71, stdev=76.21 00:18:55.474 clat percentiles (usec): 00:18:55.474 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 249], 00:18:55.474 | 30.00th=[ 277], 40.00th=[ 310], 50.00th=[ 330], 60.00th=[ 351], 00:18:55.474 | 70.00th=[ 371], 80.00th=[ 396], 90.00th=[ 420], 95.00th=[ 453], 00:18:55.474 | 99.00th=[ 498], 99.50th=[ 529], 99.90th=[ 545], 99.95th=[ 
545], 00:18:55.474 | 99.99th=[ 545] 00:18:55.474 bw ( KiB/s): min= 4496, max= 4496, per=35.89%, avg=4496.00, stdev= 0.00, samples=1 00:18:55.474 iops : min= 1124, max= 1124, avg=1124.00, stdev= 0.00, samples=1 00:18:55.475 lat (usec) : 250=10.95%, 500=47.61%, 750=40.74%, 1000=0.70% 00:18:55.475 cpu : usr=2.20%, sys=4.60%, ctx=2157, majf=0, minf=2 00:18:55.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:55.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.475 issued rwts: total=1024,1131,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:55.475 00:18:55.475 Run status group 0 (all jobs): 00:18:55.475 READ: bw=8054KiB/s (8247kB/s), 87.3KiB/s-4092KiB/s (89.4kB/s-4190kB/s), io=8376KiB (8577kB), run=1001-1040msec 00:18:55.475 WRITE: bw=12.2MiB/s (12.8MB/s), 1969KiB/s-4519KiB/s (2016kB/s-4628kB/s), io=12.7MiB (13.3MB), run=1001-1040msec 00:18:55.475 00:18:55.475 Disk stats (read/write): 00:18:55.475 nvme0n1: ios=812/1024, merge=0/0, ticks=1380/334, in_queue=1714, util=93.19% 00:18:55.475 nvme0n2: ios=42/512, merge=0/0, ticks=1687/122, in_queue=1809, util=98.17% 00:18:55.475 nvme0n3: ios=76/512, merge=0/0, ticks=957/135, in_queue=1092, util=97.17% 00:18:55.475 nvme0n4: ios=864/1024, merge=0/0, ticks=694/309, in_queue=1003, util=97.46% 00:18:55.475 20:06:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:55.475 [global] 00:18:55.475 thread=1 00:18:55.475 invalidate=1 00:18:55.475 rw=write 00:18:55.475 time_based=1 00:18:55.475 runtime=1 00:18:55.475 ioengine=libaio 00:18:55.475 direct=1 00:18:55.475 bs=4096 00:18:55.475 iodepth=128 00:18:55.475 norandommap=0 00:18:55.475 numjobs=1 00:18:55.475 00:18:55.475 verify_dump=1 00:18:55.475 verify_backlog=512 00:18:55.475 verify_state_save=0 00:18:55.475 do_verify=1 00:18:55.475 verify=crc32c-intel 00:18:55.475 [job0] 00:18:55.475 filename=/dev/nvme0n1 00:18:55.475 [job1] 00:18:55.475 filename=/dev/nvme0n2 00:18:55.475 [job2] 00:18:55.475 filename=/dev/nvme0n3 00:18:55.475 [job3] 00:18:55.475 filename=/dev/nvme0n4 00:18:55.475 Could not set queue depth (nvme0n1) 00:18:55.475 Could not set queue depth (nvme0n2) 00:18:55.475 Could not set queue depth (nvme0n3) 00:18:55.475 Could not set queue depth (nvme0n4) 00:18:55.732 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.732 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.732 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.732 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:55.732 fio-3.35 00:18:55.732 Starting 4 threads 00:18:57.166 00:18:57.166 job0: (groupid=0, jobs=1): err= 0: pid=3198409: Sat Jul 13 20:06:44 2024 00:18:57.166 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:18:57.166 slat (usec): min=2, max=34888, avg=163.97, stdev=1414.69 00:18:57.166 clat (usec): min=3526, max=56695, avg=20213.96, stdev=8784.63 00:18:57.166 lat (usec): min=4576, max=56738, avg=20377.93, stdev=8862.25 00:18:57.166 clat percentiles (usec): 00:18:57.166 | 1.00th=[ 7439], 5.00th=[11469], 10.00th=[12518], 20.00th=[13566], 
00:18:57.166 | 30.00th=[14091], 40.00th=[15139], 50.00th=[18220], 60.00th=[19792], 00:18:57.166 | 70.00th=[21890], 80.00th=[26346], 90.00th=[33424], 95.00th=[38536], 00:18:57.166 | 99.00th=[47973], 99.50th=[47973], 99.90th=[47973], 99.95th=[51119], 00:18:57.166 | 99.99th=[56886] 00:18:57.166 write: IOPS=3408, BW=13.3MiB/s (14.0MB/s)(13.5MiB/1011msec); 0 zone resets 00:18:57.166 slat (usec): min=4, max=32409, avg=134.15, stdev=910.36 00:18:57.166 clat (usec): min=2946, max=64665, avg=19192.35, stdev=11682.47 00:18:57.166 lat (usec): min=2954, max=64689, avg=19326.50, stdev=11756.80 00:18:57.166 clat percentiles (usec): 00:18:57.166 | 1.00th=[ 3720], 5.00th=[ 7111], 10.00th=[ 8455], 20.00th=[10814], 00:18:57.166 | 30.00th=[13173], 40.00th=[14091], 50.00th=[14615], 60.00th=[18220], 00:18:57.166 | 70.00th=[21627], 80.00th=[25035], 90.00th=[35914], 95.00th=[47973], 00:18:57.166 | 99.00th=[60556], 99.50th=[63177], 99.90th=[64750], 99.95th=[64750], 00:18:57.166 | 99.99th=[64750] 00:18:57.166 bw ( KiB/s): min=12592, max=13952, per=21.58%, avg=13272.00, stdev=961.67, samples=2 00:18:57.166 iops : min= 3148, max= 3488, avg=3318.00, stdev=240.42, samples=2 00:18:57.166 lat (msec) : 4=0.64%, 10=8.58%, 20=54.48%, 50=33.69%, 100=2.61% 00:18:57.166 cpu : usr=4.75%, sys=6.73%, ctx=369, majf=0, minf=1 00:18:57.166 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:57.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:57.166 issued rwts: total=3072,3446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.166 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:57.166 job1: (groupid=0, jobs=1): err= 0: pid=3198410: Sat Jul 13 20:06:44 2024 00:18:57.166 read: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:18:57.166 slat (usec): min=3, max=18126, avg=99.84, stdev=745.80 00:18:57.166 clat (usec): min=4502, max=30975, avg=13160.17, stdev=4074.89 00:18:57.166 lat (usec): min=4508, max=30985, avg=13260.01, stdev=4126.79 00:18:57.166 clat percentiles (usec): 00:18:57.166 | 1.00th=[ 8160], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10552], 00:18:57.166 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11600], 60.00th=[12256], 00:18:57.166 | 70.00th=[13173], 80.00th=[15926], 90.00th=[18744], 95.00th=[22938], 00:18:57.166 | 99.00th=[28443], 99.50th=[28443], 99.90th=[31065], 99.95th=[31065], 00:18:57.167 | 99.99th=[31065] 00:18:57.167 write: IOPS=4520, BW=17.7MiB/s (18.5MB/s)(17.8MiB/1010msec); 0 zone resets 00:18:57.167 slat (usec): min=3, max=15008, avg=117.01, stdev=746.42 00:18:57.167 clat (usec): min=1662, max=56552, avg=16273.66, stdev=10587.50 00:18:57.167 lat (usec): min=1669, max=56560, avg=16390.67, stdev=10663.49 00:18:57.167 clat percentiles (usec): 00:18:57.167 | 1.00th=[ 3752], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[ 9241], 00:18:57.167 | 30.00th=[ 9503], 40.00th=[10421], 50.00th=[11863], 60.00th=[12649], 00:18:57.167 | 70.00th=[18220], 80.00th=[23725], 90.00th=[31327], 95.00th=[42206], 00:18:57.167 | 99.00th=[51119], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 00:18:57.167 | 99.99th=[56361] 00:18:57.167 bw ( KiB/s): min=15032, max=20521, per=28.90%, avg=17776.50, stdev=3881.31, samples=2 00:18:57.167 iops : min= 3758, max= 5130, avg=4444.00, stdev=970.15, samples=2 00:18:57.167 lat (msec) : 2=0.09%, 4=0.52%, 10=24.00%, 20=59.41%, 50=15.18% 00:18:57.167 lat (msec) : 100=0.80% 00:18:57.167 cpu : usr=5.65%, sys=8.42%, ctx=357, majf=0, minf=1 
00:18:57.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:57.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:57.167 issued rwts: total=4096,4566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:57.167 job2: (groupid=0, jobs=1): err= 0: pid=3198411: Sat Jul 13 20:06:44 2024 00:18:57.167 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:18:57.167 slat (usec): min=3, max=23323, avg=100.40, stdev=641.00 00:18:57.167 clat (usec): min=3867, max=41905, avg=13494.50, stdev=4159.91 00:18:57.167 lat (usec): min=7711, max=41920, avg=13594.89, stdev=4193.29 00:18:57.167 clat percentiles (usec): 00:18:57.167 | 1.00th=[ 9241], 5.00th=[10552], 10.00th=[10945], 20.00th=[11731], 00:18:57.167 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12780], 00:18:57.167 | 70.00th=[13304], 80.00th=[13960], 90.00th=[16581], 95.00th=[19006], 00:18:57.167 | 99.00th=[34341], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:18:57.167 | 99.99th=[41681] 00:18:57.167 write: IOPS=4962, BW=19.4MiB/s (20.3MB/s)(19.4MiB/1002msec); 0 zone resets 00:18:57.167 slat (usec): min=4, max=25912, avg=97.53, stdev=678.64 00:18:57.167 clat (usec): min=1436, max=30778, avg=12322.67, stdev=3205.32 00:18:57.167 lat (usec): min=5716, max=33219, avg=12420.20, stdev=3251.75 00:18:57.167 clat percentiles (usec): 00:18:57.167 | 1.00th=[ 6587], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[10552], 00:18:57.167 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:18:57.167 | 70.00th=[12256], 80.00th=[14091], 90.00th=[15795], 95.00th=[20579], 00:18:57.167 | 99.00th=[22152], 99.50th=[26870], 99.90th=[29230], 99.95th=[29754], 00:18:57.167 | 99.99th=[30802] 00:18:57.167 bw ( KiB/s): min=18280, max=20480, per=31.51%, avg=19380.00, stdev=1555.63, samples=2 00:18:57.167 iops : min= 4570, max= 5120, avg=4845.00, stdev=388.91, samples=2 00:18:57.167 lat (msec) : 2=0.01%, 4=0.01%, 10=5.72%, 20=89.10%, 50=5.16% 00:18:57.167 cpu : usr=6.69%, sys=10.79%, ctx=334, majf=0, minf=1 00:18:57.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:57.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:57.167 issued rwts: total=4608,4972,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:57.167 job3: (groupid=0, jobs=1): err= 0: pid=3198412: Sat Jul 13 20:06:44 2024 00:18:57.167 read: IOPS=2331, BW=9325KiB/s (9549kB/s)(9372KiB/1005msec) 00:18:57.167 slat (usec): min=2, max=48665, avg=244.29, stdev=2186.30 00:18:57.167 clat (msec): min=2, max=183, avg=30.09, stdev=27.44 00:18:57.167 lat (msec): min=6, max=183, avg=30.34, stdev=27.63 00:18:57.167 clat percentiles (msec): 00:18:57.167 | 1.00th=[ 7], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 16], 00:18:57.167 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 19], 60.00th=[ 21], 00:18:57.167 | 70.00th=[ 29], 80.00th=[ 39], 90.00th=[ 48], 95.00th=[ 103], 00:18:57.167 | 99.00th=[ 138], 99.50th=[ 138], 99.90th=[ 142], 99.95th=[ 142], 00:18:57.167 | 99.99th=[ 184] 00:18:57.167 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:18:57.167 slat (usec): min=3, max=51910, avg=153.91, stdev=1391.96 00:18:57.167 clat (usec): min=5008, max=94522, 
avg=22125.91, stdev=16670.03 00:18:57.167 lat (usec): min=5016, max=94529, avg=22279.83, stdev=16730.83 00:18:57.167 clat percentiles (usec): 00:18:57.167 | 1.00th=[ 7898], 5.00th=[10028], 10.00th=[12518], 20.00th=[12911], 00:18:57.167 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14484], 60.00th=[15795], 00:18:57.167 | 70.00th=[20055], 80.00th=[29230], 90.00th=[43779], 95.00th=[65799], 00:18:57.167 | 99.00th=[81265], 99.50th=[89654], 99.90th=[94897], 99.95th=[94897], 00:18:57.167 | 99.99th=[94897] 00:18:57.167 bw ( KiB/s): min= 8192, max=12288, per=16.65%, avg=10240.00, stdev=2896.31, samples=2 00:18:57.167 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:18:57.167 lat (msec) : 4=0.02%, 10=3.55%, 20=60.78%, 50=27.07%, 100=5.98% 00:18:57.167 lat (msec) : 250=2.61% 00:18:57.167 cpu : usr=3.19%, sys=3.98%, ctx=209, majf=0, minf=1 00:18:57.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:57.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:57.167 issued rwts: total=2343,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:57.167 00:18:57.167 Run status group 0 (all jobs): 00:18:57.167 READ: bw=54.6MiB/s (57.2MB/s), 9325KiB/s-18.0MiB/s (9549kB/s-18.8MB/s), io=55.2MiB (57.8MB), run=1002-1011msec 00:18:57.167 WRITE: bw=60.1MiB/s (63.0MB/s), 9.95MiB/s-19.4MiB/s (10.4MB/s-20.3MB/s), io=60.7MiB (63.7MB), run=1002-1011msec 00:18:57.167 00:18:57.167 Disk stats (read/write): 00:18:57.167 nvme0n1: ios=2583/2831, merge=0/0, ticks=53216/50029, in_queue=103245, util=89.88% 00:18:57.167 nvme0n2: ios=3610/3991, merge=0/0, ticks=32557/43324, in_queue=75881, util=95.53% 00:18:57.167 nvme0n3: ios=4121/4280, merge=0/0, ticks=26050/23180, in_queue=49230, util=95.62% 00:18:57.167 nvme0n4: ios=2105/2079, merge=0/0, ticks=28515/19326, in_queue=47841, util=95.48% 00:18:57.167 20:06:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:57.167 [global] 00:18:57.167 thread=1 00:18:57.167 invalidate=1 00:18:57.167 rw=randwrite 00:18:57.167 time_based=1 00:18:57.167 runtime=1 00:18:57.167 ioengine=libaio 00:18:57.167 direct=1 00:18:57.167 bs=4096 00:18:57.167 iodepth=128 00:18:57.167 norandommap=0 00:18:57.167 numjobs=1 00:18:57.167 00:18:57.167 verify_dump=1 00:18:57.167 verify_backlog=512 00:18:57.167 verify_state_save=0 00:18:57.167 do_verify=1 00:18:57.167 verify=crc32c-intel 00:18:57.167 [job0] 00:18:57.167 filename=/dev/nvme0n1 00:18:57.167 [job1] 00:18:57.167 filename=/dev/nvme0n2 00:18:57.167 [job2] 00:18:57.167 filename=/dev/nvme0n3 00:18:57.167 [job3] 00:18:57.167 filename=/dev/nvme0n4 00:18:57.167 Could not set queue depth (nvme0n1) 00:18:57.167 Could not set queue depth (nvme0n2) 00:18:57.167 Could not set queue depth (nvme0n3) 00:18:57.167 Could not set queue depth (nvme0n4) 00:18:57.167 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:57.167 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:57.167 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:57.167 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
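The fio-wrapper call above ('-i 4096 -d 128 -t randwrite -r 1 -v') expands to exactly the ini job file echoed into the log. A standalone equivalent, assuming the connected subsystem enumerated as controller nvme0 with four namespaces:

  # randwrite-verify.fio -- mirrors the wrapper-generated job file above
  [global]
  thread=1
  invalidate=1
  rw=randwrite
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=128
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme0n2
  [job2]
  filename=/dev/nvme0n3
  [job3]
  filename=/dev/nvme0n4

Run it with 'fio randwrite-verify.fio'; because do_verify=1 is set, each job re-reads what it wrote and checks the crc32c-intel checksums, so data integrity is exercised as well as the I/O path.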
00:18:57.167 fio-3.35 00:18:57.167 Starting 4 threads 00:18:58.542 00:18:58.542 job0: (groupid=0, jobs=1): err= 0: pid=3198636: Sat Jul 13 20:06:45 2024 00:18:58.542 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:18:58.542 slat (usec): min=2, max=20397, avg=174.81, stdev=1045.79 00:18:58.542 clat (usec): min=8079, max=58912, avg=24039.83, stdev=12102.54 00:18:58.542 lat (usec): min=8277, max=60423, avg=24214.64, stdev=12127.48 00:18:58.542 clat percentiles (usec): 00:18:58.542 | 1.00th=[ 9765], 5.00th=[11731], 10.00th=[13566], 20.00th=[15401], 00:18:58.542 | 30.00th=[16057], 40.00th=[17433], 50.00th=[19268], 60.00th=[20841], 00:18:58.542 | 70.00th=[24511], 80.00th=[36439], 90.00th=[44827], 95.00th=[50594], 00:18:58.542 | 99.00th=[56886], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:18:58.542 | 99.99th=[58983] 00:18:58.542 write: IOPS=3011, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1003msec); 0 zone resets 00:18:58.542 slat (usec): min=3, max=40980, avg=175.57, stdev=1435.01 00:18:58.542 clat (usec): min=1106, max=60136, avg=21588.74, stdev=12977.21 00:18:58.542 lat (usec): min=4588, max=60145, avg=21764.31, stdev=13054.71 00:18:58.542 clat percentiles (usec): 00:18:58.542 | 1.00th=[ 4817], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[12125], 00:18:58.542 | 30.00th=[13698], 40.00th=[14877], 50.00th=[16712], 60.00th=[20317], 00:18:58.542 | 70.00th=[22676], 80.00th=[28181], 90.00th=[43779], 95.00th=[53740], 00:18:58.542 | 99.00th=[60031], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:18:58.542 | 99.99th=[60031] 00:18:58.542 bw ( KiB/s): min=11432, max=11712, per=18.63%, avg=11572.00, stdev=197.99, samples=2 00:18:58.542 iops : min= 2858, max= 2928, avg=2893.00, stdev=49.50, samples=2 00:18:58.542 lat (msec) : 2=0.02%, 10=6.56%, 20=49.78%, 50=38.00%, 100=5.64% 00:18:58.542 cpu : usr=3.49%, sys=3.49%, ctx=234, majf=0, minf=1 00:18:58.543 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:58.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:58.543 issued rwts: total=2560,3021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.543 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:58.543 job1: (groupid=0, jobs=1): err= 0: pid=3198637: Sat Jul 13 20:06:45 2024 00:18:58.543 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:18:58.543 slat (usec): min=2, max=10282, avg=104.31, stdev=588.16 00:18:58.543 clat (usec): min=3215, max=34061, avg=13693.55, stdev=4779.41 00:18:58.543 lat (usec): min=3227, max=34098, avg=13797.86, stdev=4805.09 00:18:58.543 clat percentiles (usec): 00:18:58.543 | 1.00th=[ 6063], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10552], 00:18:58.543 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12256], 60.00th=[13042], 00:18:58.543 | 70.00th=[13960], 80.00th=[15270], 90.00th=[21890], 95.00th=[25297], 00:18:58.543 | 99.00th=[30802], 99.50th=[31327], 99.90th=[32375], 99.95th=[32375], 00:18:58.543 | 99.99th=[33817] 00:18:58.543 write: IOPS=5339, BW=20.9MiB/s (21.9MB/s)(20.9MiB/1003msec); 0 zone resets 00:18:58.543 slat (usec): min=3, max=11720, avg=74.14, stdev=380.16 00:18:58.543 clat (usec): min=259, max=25496, avg=10668.07, stdev=2720.53 00:18:58.543 lat (usec): min=3390, max=25501, avg=10742.21, stdev=2718.98 00:18:58.543 clat percentiles (usec): 00:18:58.543 | 1.00th=[ 4883], 5.00th=[ 7832], 10.00th=[ 8455], 20.00th=[ 9241], 00:18:58.543 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10159], 
60.00th=[10421], 00:18:58.543 | 70.00th=[11076], 80.00th=[11863], 90.00th=[12780], 95.00th=[15139], 00:18:58.543 | 99.00th=[23200], 99.50th=[23462], 99.90th=[25297], 99.95th=[25297], 00:18:58.543 | 99.99th=[25560] 00:18:58.543 bw ( KiB/s): min=20521, max=21344, per=33.70%, avg=20932.50, stdev=581.95, samples=2 00:18:58.543 iops : min= 5130, max= 5336, avg=5233.00, stdev=145.66, samples=2 00:18:58.543 lat (usec) : 500=0.01% 00:18:58.543 lat (msec) : 4=0.55%, 10=28.37%, 20=64.47%, 50=6.60% 00:18:58.543 cpu : usr=6.59%, sys=11.08%, ctx=591, majf=0, minf=1 00:18:58.543 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:58.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:58.543 issued rwts: total=5120,5356,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.543 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:58.543 job2: (groupid=0, jobs=1): err= 0: pid=3198638: Sat Jul 13 20:06:45 2024 00:18:58.543 read: IOPS=3417, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1005msec) 00:18:58.543 slat (usec): min=2, max=31517, avg=154.68, stdev=1072.30 00:18:58.543 clat (usec): min=851, max=48847, avg=18437.55, stdev=8680.38 00:18:58.543 lat (usec): min=4962, max=48863, avg=18592.23, stdev=8728.56 00:18:58.543 clat percentiles (usec): 00:18:58.543 | 1.00th=[ 5997], 5.00th=[ 8979], 10.00th=[11076], 20.00th=[12387], 00:18:58.543 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14746], 60.00th=[17171], 00:18:58.543 | 70.00th=[21103], 80.00th=[24249], 90.00th=[32637], 95.00th=[38536], 00:18:58.543 | 99.00th=[44303], 99.50th=[44303], 99.90th=[47449], 99.95th=[47973], 00:18:58.543 | 99.99th=[49021] 00:18:58.543 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:18:58.543 slat (usec): min=3, max=40721, avg=124.74, stdev=1018.65 00:18:58.543 clat (usec): min=6244, max=52651, avg=17794.54, stdev=8137.77 00:18:58.543 lat (usec): min=6250, max=52658, avg=17919.28, stdev=8195.08 00:18:58.543 clat percentiles (usec): 00:18:58.543 | 1.00th=[ 6587], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[11731], 00:18:58.543 | 30.00th=[12649], 40.00th=[13829], 50.00th=[16581], 60.00th=[18744], 00:18:58.543 | 70.00th=[20055], 80.00th=[22676], 90.00th=[24773], 95.00th=[29754], 00:18:58.543 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:18:58.543 | 99.99th=[52691] 00:18:58.543 bw ( KiB/s): min=12888, max=15784, per=23.08%, avg=14336.00, stdev=2047.78, samples=2 00:18:58.543 iops : min= 3222, max= 3946, avg=3584.00, stdev=511.95, samples=2 00:18:58.543 lat (usec) : 1000=0.01% 00:18:58.543 lat (msec) : 10=7.47%, 20=59.87%, 50=31.79%, 100=0.87% 00:18:58.543 cpu : usr=2.69%, sys=5.08%, ctx=250, majf=0, minf=1 00:18:58.543 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:58.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:58.543 issued rwts: total=3435,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.543 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:58.543 job3: (groupid=0, jobs=1): err= 0: pid=3198639: Sat Jul 13 20:06:45 2024 00:18:58.543 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:18:58.543 slat (usec): min=3, max=46703, avg=152.80, stdev=1016.91 00:18:58.543 clat (usec): min=9829, max=73746, avg=19860.44, stdev=11048.36 00:18:58.543 lat (usec): min=10188, max=73753, 
avg=20013.23, stdev=11099.59 00:18:58.543 clat percentiles (usec): 00:18:58.543 | 1.00th=[11207], 5.00th=[12256], 10.00th=[12780], 20.00th=[13698], 00:18:58.543 | 30.00th=[14484], 40.00th=[15270], 50.00th=[15926], 60.00th=[16909], 00:18:58.543 | 70.00th=[20055], 80.00th=[24773], 90.00th=[28705], 95.00th=[33424], 00:18:58.543 | 99.00th=[70779], 99.50th=[73925], 99.90th=[73925], 99.95th=[73925], 00:18:58.543 | 99.99th=[73925] 00:18:58.543 write: IOPS=3630, BW=14.2MiB/s (14.9MB/s)(14.2MiB/1004msec); 0 zone resets 00:18:58.543 slat (usec): min=4, max=7643, avg=112.68, stdev=567.63 00:18:58.543 clat (usec): min=2398, max=30249, avg=15270.19, stdev=5133.02 00:18:58.543 lat (usec): min=6368, max=30255, avg=15382.87, stdev=5149.87 00:18:58.543 clat percentiles (usec): 00:18:58.543 | 1.00th=[ 9110], 5.00th=[10421], 10.00th=[10945], 20.00th=[11469], 00:18:58.543 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13304], 60.00th=[14091], 00:18:58.543 | 70.00th=[15270], 80.00th=[20579], 90.00th=[24511], 95.00th=[25297], 00:18:58.543 | 99.00th=[28705], 99.50th=[28967], 99.90th=[30278], 99.95th=[30278], 00:18:58.543 | 99.99th=[30278] 00:18:58.543 bw ( KiB/s): min= 8768, max=19904, per=23.08%, avg=14336.00, stdev=7874.34, samples=2 00:18:58.543 iops : min= 2192, max= 4976, avg=3584.00, stdev=1968.59, samples=2 00:18:58.543 lat (msec) : 4=0.01%, 10=1.56%, 20=72.61%, 50=24.06%, 100=1.76% 00:18:58.543 cpu : usr=5.08%, sys=8.67%, ctx=403, majf=0, minf=1 00:18:58.543 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:58.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:58.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:58.543 issued rwts: total=3584,3645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:58.543 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:58.543 00:18:58.543 Run status group 0 (all jobs): 00:18:58.543 READ: bw=57.1MiB/s (59.9MB/s), 9.97MiB/s-19.9MiB/s (10.5MB/s-20.9MB/s), io=57.4MiB (60.2MB), run=1003-1005msec 00:18:58.543 WRITE: bw=60.7MiB/s (63.6MB/s), 11.8MiB/s-20.9MiB/s (12.3MB/s-21.9MB/s), io=61.0MiB (63.9MB), run=1003-1005msec 00:18:58.543 00:18:58.543 Disk stats (read/write): 00:18:58.543 nvme0n1: ios=2068/2312, merge=0/0, ticks=17155/19195, in_queue=36350, util=89.58% 00:18:58.543 nvme0n2: ios=4388/4608, merge=0/0, ticks=21620/19360, in_queue=40980, util=98.38% 00:18:58.543 nvme0n3: ios=2827/3072, merge=0/0, ticks=24685/21096, in_queue=45781, util=86.34% 00:18:58.543 nvme0n4: ios=3193/3584, merge=0/0, ticks=13228/12641, in_queue=25869, util=95.90% 00:18:58.543 20:06:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:58.543 20:06:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3198776 00:18:58.543 20:06:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:58.543 20:06:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:58.543 [global] 00:18:58.543 thread=1 00:18:58.543 invalidate=1 00:18:58.543 rw=read 00:18:58.543 time_based=1 00:18:58.543 runtime=10 00:18:58.543 ioengine=libaio 00:18:58.543 direct=1 00:18:58.543 bs=4096 00:18:58.543 iodepth=1 00:18:58.543 norandommap=1 00:18:58.543 numjobs=1 00:18:58.543 00:18:58.543 [job0] 00:18:58.543 filename=/dev/nvme0n1 00:18:58.543 [job1] 00:18:58.543 filename=/dev/nvme0n2 00:18:58.543 [job2] 00:18:58.543 filename=/dev/nvme0n3 00:18:58.543 [job3] 00:18:58.543 filename=/dev/nvme0n4 00:18:58.543 
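What follows is the hotplug phase of fio.sh: a 10-second read job ('-t read -r 10') is started across all four namespaces, then the backing bdevs are torn out from under it over RPC, with fio expected to fail with Remote I/O errors rather than the target falling over. A hedged sketch of that sequence (same rpc.py path and bdev names as above; the raw fio command line stands in for the wrapper):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  fio --name=hotplug --ioengine=libaio --direct=1 --bs=4096 --iodepth=1 \
      --rw=read --time_based=1 --runtime=10 --filename=/dev/nvme0n1 &
  fio_pid=$!
  sleep 3                                  # give the reads time to get going
  $RPC bdev_raid_delete concat0            # namespaces disappear mid-I/O
  $RPC bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $RPC bdev_malloc_delete "$m"
  done
  wait "$fio_pid" || echo 'fio failed as expected'   # err=121/err=5 from io_u is the pass condition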
Could not set queue depth (nvme0n1) 00:18:58.543 Could not set queue depth (nvme0n2) 00:18:58.543 Could not set queue depth (nvme0n3) 00:18:58.543 Could not set queue depth (nvme0n4) 00:18:58.543 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.543 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.543 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.543 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.543 fio-3.35 00:18:58.543 Starting 4 threads 00:19:01.835 20:06:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:01.835 20:06:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:01.835 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=315392, buflen=4096 00:19:01.835 fio: pid=3198869, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:01.835 20:06:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:01.835 20:06:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:01.835 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=20824064, buflen=4096 00:19:01.835 fio: pid=3198868, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:02.093 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=598016, buflen=4096 00:19:02.093 fio: pid=3198866, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:02.093 20:06:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:02.093 20:06:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:02.352 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=27095040, buflen=4096 00:19:02.352 fio: pid=3198867, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:02.352 20:06:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:02.352 20:06:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:02.352 00:19:02.352 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3198866: Sat Jul 13 20:06:49 2024 00:19:02.352 read: IOPS=42, BW=170KiB/s (175kB/s)(584KiB/3426msec) 00:19:02.352 slat (usec): min=7, max=8400, avg=97.76, stdev=730.27 00:19:02.352 clat (usec): min=309, max=42389, avg=23199.91, stdev=20612.05 00:19:02.352 lat (usec): min=317, max=50684, avg=23298.25, stdev=20674.31 00:19:02.352 clat percentiles (usec): 00:19:02.352 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 416], 00:19:02.352 | 30.00th=[ 465], 40.00th=[ 523], 50.00th=[41157], 60.00th=[41681], 00:19:02.352 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:02.352 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 
00:19:02.352 | 99.99th=[42206] 00:19:02.352 bw ( KiB/s): min= 96, max= 368, per=1.39%, avg=180.00, stdev=110.82, samples=6 00:19:02.352 iops : min= 24, max= 92, avg=45.00, stdev=27.71, samples=6 00:19:02.352 lat (usec) : 500=37.41%, 750=3.40%, 1000=2.72% 00:19:02.352 lat (msec) : 2=0.68%, 20=0.68%, 50=54.42% 00:19:02.352 cpu : usr=0.00%, sys=0.15%, ctx=152, majf=0, minf=1 00:19:02.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.352 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.352 issued rwts: total=147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:02.352 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3198867: Sat Jul 13 20:06:49 2024 00:19:02.352 read: IOPS=1791, BW=7163KiB/s (7335kB/s)(25.8MiB/3694msec) 00:19:02.352 slat (usec): min=4, max=27010, avg=24.98, stdev=467.54 00:19:02.352 clat (usec): min=293, max=42027, avg=530.58, stdev=2146.51 00:19:02.352 lat (usec): min=299, max=42044, avg=554.47, stdev=2195.62 00:19:02.352 clat percentiles (usec): 00:19:02.352 | 1.00th=[ 318], 5.00th=[ 367], 10.00th=[ 375], 20.00th=[ 383], 00:19:02.352 | 30.00th=[ 392], 40.00th=[ 404], 50.00th=[ 416], 60.00th=[ 429], 00:19:02.352 | 70.00th=[ 441], 80.00th=[ 457], 90.00th=[ 469], 95.00th=[ 478], 00:19:02.352 | 99.00th=[ 545], 99.50th=[ 553], 99.90th=[42206], 99.95th=[42206], 00:19:02.352 | 99.99th=[42206] 00:19:02.352 bw ( KiB/s): min= 2160, max= 9656, per=56.85%, avg=7339.57, stdev=2420.67, samples=7 00:19:02.352 iops : min= 540, max= 2414, avg=1834.86, stdev=605.16, samples=7 00:19:02.352 lat (usec) : 500=97.29%, 750=2.42% 00:19:02.352 lat (msec) : 50=0.27% 00:19:02.352 cpu : usr=0.87%, sys=3.82%, ctx=6621, majf=0, minf=1 00:19:02.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.352 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.352 issued rwts: total=6616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:02.352 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3198868: Sat Jul 13 20:06:49 2024 00:19:02.352 read: IOPS=1603, BW=6413KiB/s (6567kB/s)(19.9MiB/3171msec) 00:19:02.352 slat (usec): min=5, max=4899, avg=21.96, stdev=69.28 00:19:02.352 clat (usec): min=314, max=42206, avg=591.04, stdev=961.52 00:19:02.352 lat (usec): min=320, max=42221, avg=613.01, stdev=997.70 00:19:02.352 clat percentiles (usec): 00:19:02.352 | 1.00th=[ 371], 5.00th=[ 396], 10.00th=[ 412], 20.00th=[ 437], 00:19:02.352 | 30.00th=[ 453], 40.00th=[ 478], 50.00th=[ 570], 60.00th=[ 627], 00:19:02.352 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 766], 95.00th=[ 791], 00:19:02.352 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 898], 99.95th=[34866], 00:19:02.352 | 99.99th=[42206] 00:19:02.352 bw ( KiB/s): min= 5584, max= 7664, per=49.33%, avg=6368.00, stdev=738.41, samples=6 00:19:02.352 iops : min= 1396, max= 1916, avg=1592.00, stdev=184.60, samples=6 00:19:02.352 lat (usec) : 500=45.13%, 750=42.12%, 1000=12.66% 00:19:02.352 lat (msec) : 50=0.06% 00:19:02.352 cpu : usr=1.42%, sys=4.16%, ctx=5086, majf=0, minf=1 00:19:02.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:19:02.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.352 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.352 issued rwts: total=5085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:02.352 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3198869: Sat Jul 13 20:06:49 2024 00:19:02.352 read: IOPS=26, BW=106KiB/s (108kB/s)(308KiB/2917msec) 00:19:02.352 slat (nsec): min=8737, max=38900, avg=23210.81, stdev=9915.59 00:19:02.352 clat (usec): min=480, max=42406, avg=37557.96, stdev=12696.07 00:19:02.352 lat (usec): min=493, max=42419, avg=37581.02, stdev=12698.31 00:19:02.352 clat percentiles (usec): 00:19:02.352 | 1.00th=[ 482], 5.00th=[ 529], 10.00th=[ 545], 20.00th=[41681], 00:19:02.352 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:02.352 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:02.352 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:02.352 | 99.99th=[42206] 00:19:02.352 bw ( KiB/s): min= 96, max= 120, per=0.77%, avg=100.80, stdev=10.73, samples=5 00:19:02.352 iops : min= 24, max= 30, avg=25.20, stdev= 2.68, samples=5 00:19:02.352 lat (usec) : 500=1.28%, 750=8.97% 00:19:02.352 lat (msec) : 50=88.46% 00:19:02.352 cpu : usr=0.00%, sys=0.14%, ctx=78, majf=0, minf=1 00:19:02.352 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.352 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.352 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.352 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:02.352 00:19:02.352 Run status group 0 (all jobs): 00:19:02.352 READ: bw=12.6MiB/s (13.2MB/s), 106KiB/s-7163KiB/s (108kB/s-7335kB/s), io=46.6MiB (48.8MB), run=2917-3694msec 00:19:02.352 00:19:02.352 Disk stats (read/write): 00:19:02.352 nvme0n1: ios=163/0, merge=0/0, ticks=3405/0, in_queue=3405, util=97.91% 00:19:02.352 nvme0n2: ios=6612/0, merge=0/0, ticks=3296/0, in_queue=3296, util=94.37% 00:19:02.352 nvme0n3: ios=4982/0, merge=0/0, ticks=2892/0, in_queue=2892, util=96.63% 00:19:02.352 nvme0n4: ios=75/0, merge=0/0, ticks=2852/0, in_queue=2852, util=96.74% 00:19:02.610 20:06:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:02.610 20:06:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:02.868 20:06:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:02.868 20:06:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:03.126 20:06:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:03.126 20:06:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:03.384 20:06:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:03.384 20:06:50 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:03.642 20:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:03.642 20:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3198776 00:19:03.642 20:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:03.642 20:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:03.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:03.900 20:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:03.900 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:19:03.900 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:03.900 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:03.900 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:03.900 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:03.900 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:19:03.900 20:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:03.900 20:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:03.900 nvmf hotplug test: fio failed as expected 00:19:03.900 20:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:04.158 rmmod nvme_tcp 00:19:04.158 rmmod nvme_fabrics 00:19:04.158 rmmod nvme_keyring 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3196869 ']' 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3196869 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 3196869 ']' 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@950 -- # kill -0 3196869 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3196869 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3196869' 00:19:04.158 killing process with pid 3196869 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 3196869 00:19:04.158 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 3196869 00:19:04.416 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:04.416 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:04.416 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:04.416 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:04.416 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:04.416 20:06:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.416 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:04.416 20:06:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.945 20:06:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:06.945 00:19:06.945 real 0m23.330s 00:19:06.945 user 1m19.335s 00:19:06.945 sys 0m7.325s 00:19:06.945 20:06:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:06.945 20:06:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.945 ************************************ 00:19:06.945 END TEST nvmf_fio_target 00:19:06.945 ************************************ 00:19:06.945 20:06:54 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:06.945 20:06:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:06.945 20:06:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:06.945 20:06:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:06.945 ************************************ 00:19:06.945 START TEST nvmf_bdevio 00:19:06.945 ************************************ 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:06.945 * Looking for test storage... 
00:19:06.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:06.945 20:06:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:08.317 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:08.317 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:08.317 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:08.318 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:08.318 
Found net devices under 0000:0a:00.1: cvl_0_1 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:08.318 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.576 20:06:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:08.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:19:08.576 00:19:08.576 --- 10.0.0.2 ping statistics --- 00:19:08.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.576 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:08.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:08.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:19:08.576 00:19:08.576 --- 10.0.0.1 ping statistics --- 00:19:08.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.576 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3201486 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3201486 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 3201486 ']' 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:08.576 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:08.576 [2024-07-13 20:06:56.182535] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:19:08.576 [2024-07-13 20:06:56.182617] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.576 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.834 [2024-07-13 20:06:56.258297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:08.834 [2024-07-13 20:06:56.355446] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.834 [2024-07-13 20:06:56.355514] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:08.834 [2024-07-13 20:06:56.355538] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.834 [2024-07-13 20:06:56.355551] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.834 [2024-07-13 20:06:56.355563] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.834 [2024-07-13 20:06:56.355652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:08.834 [2024-07-13 20:06:56.355715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:08.834 [2024-07-13 20:06:56.355767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:08.834 [2024-07-13 20:06:56.355770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.834 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:08.834 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:19:08.834 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.834 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.834 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:09.091 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.091 20:06:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:09.091 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.091 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:09.091 [2024-07-13 20:06:56.511724] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.091 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.091 20:06:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:09.091 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.091 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:09.091 Malloc0 00:19:09.091 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.091 20:06:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:09.091 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.091 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:09.091 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.091 20:06:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:09.091 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:19:09.092 [2024-07-13 20:06:56.564263] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:09.092 { 00:19:09.092 "params": { 00:19:09.092 "name": "Nvme$subsystem", 00:19:09.092 "trtype": "$TEST_TRANSPORT", 00:19:09.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:09.092 "adrfam": "ipv4", 00:19:09.092 "trsvcid": "$NVMF_PORT", 00:19:09.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:09.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:09.092 "hdgst": ${hdgst:-false}, 00:19:09.092 "ddgst": ${ddgst:-false} 00:19:09.092 }, 00:19:09.092 "method": "bdev_nvme_attach_controller" 00:19:09.092 } 00:19:09.092 EOF 00:19:09.092 )") 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:09.092 20:06:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:09.092 "params": { 00:19:09.092 "name": "Nvme1", 00:19:09.092 "trtype": "tcp", 00:19:09.092 "traddr": "10.0.0.2", 00:19:09.092 "adrfam": "ipv4", 00:19:09.092 "trsvcid": "4420", 00:19:09.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.092 "hdgst": false, 00:19:09.092 "ddgst": false 00:19:09.092 }, 00:19:09.092 "method": "bdev_nvme_attach_controller" 00:19:09.092 }' 00:19:09.092 [2024-07-13 20:06:56.614537] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:19:09.092 [2024-07-13 20:06:56.614623] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3201596 ] 00:19:09.092 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.092 [2024-07-13 20:06:56.677801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:09.349 [2024-07-13 20:06:56.771278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.349 [2024-07-13 20:06:56.771329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.349 [2024-07-13 20:06:56.771333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.349 I/O targets: 00:19:09.349 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:09.349 00:19:09.349 00:19:09.349 CUnit - A unit testing framework for C - Version 2.1-3 00:19:09.349 http://cunit.sourceforge.net/ 00:19:09.349 00:19:09.349 00:19:09.349 Suite: bdevio tests on: Nvme1n1 00:19:09.606 Test: blockdev write read block ...passed 00:19:09.606 Test: blockdev write zeroes read block ...passed 00:19:09.606 Test: blockdev write zeroes read no split ...passed 00:19:09.606 Test: blockdev write zeroes read split ...passed 00:19:09.606 Test: blockdev write zeroes read split partial ...passed 00:19:09.606 Test: blockdev reset ...[2024-07-13 20:06:57.201788] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:09.606 [2024-07-13 20:06:57.201904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeada00 (9): Bad file descriptor 00:19:09.606 [2024-07-13 20:06:57.216306] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:09.606 passed 00:19:09.606 Test: blockdev write read 8 blocks ...passed 00:19:09.606 Test: blockdev write read size > 128k ...passed 00:19:09.606 Test: blockdev write read invalid size ...passed 00:19:09.863 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:09.863 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:09.863 Test: blockdev write read max offset ...passed 00:19:09.863 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:09.863 Test: blockdev writev readv 8 blocks ...passed 00:19:09.863 Test: blockdev writev readv 30 x 1block ...passed 00:19:09.863 Test: blockdev writev readv block ...passed 00:19:09.863 Test: blockdev writev readv size > 128k ...passed 00:19:09.863 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:09.863 Test: blockdev comparev and writev ...[2024-07-13 20:06:57.434691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.863 [2024-07-13 20:06:57.434726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:09.863 [2024-07-13 20:06:57.434751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.863 [2024-07-13 20:06:57.434768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:09.863 [2024-07-13 20:06:57.435159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.863 [2024-07-13 20:06:57.435184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:09.863 [2024-07-13 20:06:57.435214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.863 [2024-07-13 20:06:57.435231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:09.863 [2024-07-13 20:06:57.435601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.863 [2024-07-13 20:06:57.435624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:09.863 [2024-07-13 20:06:57.435646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.863 [2024-07-13 20:06:57.435662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:09.863 [2024-07-13 20:06:57.436074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.863 [2024-07-13 20:06:57.436098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:09.863 [2024-07-13 20:06:57.436119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:09.863 [2024-07-13 20:06:57.436136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:09.863 passed 00:19:09.863 Test: blockdev nvme passthru rw ...passed 00:19:09.863 Test: blockdev nvme passthru vendor specific ...[2024-07-13 20:06:57.518249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:09.863 [2024-07-13 20:06:57.518278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:09.863 [2024-07-13 20:06:57.518491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:09.863 [2024-07-13 20:06:57.518514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:09.863 [2024-07-13 20:06:57.518725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:09.863 [2024-07-13 20:06:57.518748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:09.863 [2024-07-13 20:06:57.518962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:09.863 [2024-07-13 20:06:57.518986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:09.863 passed 00:19:10.121 Test: blockdev nvme admin passthru ...passed 00:19:10.121 Test: blockdev copy ...passed 00:19:10.121 00:19:10.121 Run Summary: Type Total Ran Passed Failed Inactive 00:19:10.121 suites 1 1 n/a 0 0 00:19:10.121 tests 23 23 23 0 0 00:19:10.121 asserts 152 152 152 0 n/a 00:19:10.121 00:19:10.121 Elapsed time = 1.157 seconds 00:19:10.121 20:06:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.121 20:06:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.121 20:06:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:10.121 20:06:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.121 20:06:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:10.121 20:06:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:10.121 20:06:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:10.121 20:06:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:10.121 20:06:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:10.121 20:06:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:10.121 20:06:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:10.121 20:06:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:10.121 rmmod nvme_tcp 00:19:10.379 rmmod nvme_fabrics 00:19:10.379 rmmod nvme_keyring 00:19:10.379 20:06:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:10.379 20:06:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:10.379 20:06:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:10.379 20:06:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3201486 ']' 00:19:10.379 20:06:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3201486 00:19:10.379 20:06:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
3201486 ']' 00:19:10.379 20:06:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 3201486 00:19:10.379 20:06:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:19:10.379 20:06:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:10.379 20:06:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3201486 00:19:10.379 20:06:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:10.379 20:06:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:10.379 20:06:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3201486' 00:19:10.379 killing process with pid 3201486 00:19:10.379 20:06:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 3201486 00:19:10.379 20:06:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 3201486 00:19:10.638 20:06:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:10.638 20:06:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:10.638 20:06:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:10.638 20:06:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:10.638 20:06:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:10.638 20:06:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.638 20:06:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.638 20:06:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.540 20:07:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:12.540 00:19:12.540 real 0m6.125s 00:19:12.540 user 0m9.715s 00:19:12.540 sys 0m2.052s 00:19:12.540 20:07:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:12.540 20:07:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:12.540 ************************************ 00:19:12.540 END TEST nvmf_bdevio 00:19:12.540 ************************************ 00:19:12.540 20:07:00 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:12.540 20:07:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:12.540 20:07:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:12.540 20:07:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:12.798 ************************************ 00:19:12.798 START TEST nvmf_auth_target 00:19:12.798 ************************************ 00:19:12.798 20:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:12.798 * Looking for test storage... 
00:19:12.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:12.798 20:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:12.798 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:12.798 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:12.799 20:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.696 20:07:02 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:14.696 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.696 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:14.697 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:14.697 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:14.697 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.697 20:07:02 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:14.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:14.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:19:14.955 00:19:14.955 --- 10.0.0.2 ping statistics --- 00:19:14.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.955 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:14.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:19:14.955 00:19:14.955 --- 10.0.0.1 ping statistics --- 00:19:14.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.955 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3203697 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3203697 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3203697 ']' 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
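[editor's note] The nvmftestinit sequence above builds the whole TCP test rig from the two ice ports it discovered: port 0000:0a:00.0 (cvl_0_0) is moved into a private network namespace and addressed as the NVMe-oF target, while its peer 0000:0a:00.1 (cvl_0_1) stays in the root namespace as the initiator, so traffic between 10.0.0.1 and 10.0.0.2 crosses the physical link. A minimal standalone sketch of that setup, reusing the interface and namespace names from this run:

    #!/usr/bin/env bash
    # Move one physical port into a private namespace (the target side)
    # and leave its peer in the root namespace (the initiator side),
    # as the harness did above.
    TARGET_IF=cvl_0_0        # 0000:0a:00.0 in this run
    INITIATOR_IF=cvl_0_1     # 0000:0a:00.1 in this run
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"              # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"       # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target address
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator

Every subsequent nvmf_tgt launch and target-side RPC in the log is wrapped in 'ip netns exec cvl_0_0_ns_spdk', which is why the target listens on 10.0.0.2 while host-side nvme connect goes out over cvl_0_1. [end editor's note]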
00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:14.955 20:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.220 20:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:15.220 20:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:15.220 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:15.220 20:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:15.220 20:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.220 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.220 20:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3203722 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=141658b1cf2af8fa2d04bd85b1bd541f7eecb720828a0939 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DLW 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 141658b1cf2af8fa2d04bd85b1bd541f7eecb720828a0939 0 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 141658b1cf2af8fa2d04bd85b1bd541f7eecb720828a0939 0 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=141658b1cf2af8fa2d04bd85b1bd541f7eecb720828a0939 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DLW 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DLW 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.DLW 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5c2b2c886d7a13a6966789f844928eab9f82b2c47efb29954b4f82bd0e460174 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ZY0 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5c2b2c886d7a13a6966789f844928eab9f82b2c47efb29954b4f82bd0e460174 3 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5c2b2c886d7a13a6966789f844928eab9f82b2c47efb29954b4f82bd0e460174 3 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5c2b2c886d7a13a6966789f844928eab9f82b2c47efb29954b4f82bd0e460174 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ZY0 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ZY0 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.ZY0 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a3606f5925530d5e95a2dbe9a17d743d 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0mC 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a3606f5925530d5e95a2dbe9a17d743d 1 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a3606f5925530d5e95a2dbe9a17d743d 1 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=a3606f5925530d5e95a2dbe9a17d743d 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:15.221 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0mC 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0mC 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.0mC 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b1ad89889dbe1f4dc9097a633acc3772be40b85e4b02c9b9 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.LyW 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b1ad89889dbe1f4dc9097a633acc3772be40b85e4b02c9b9 2 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b1ad89889dbe1f4dc9097a633acc3772be40b85e4b02c9b9 2 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b1ad89889dbe1f4dc9097a633acc3772be40b85e4b02c9b9 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.LyW 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.LyW 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.LyW 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=73b7283cfe3b6c10eab93c7ed5d0825251e6fffd94312374 00:19:15.479 
20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.qJ3 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 73b7283cfe3b6c10eab93c7ed5d0825251e6fffd94312374 2 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 73b7283cfe3b6c10eab93c7ed5d0825251e6fffd94312374 2 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=73b7283cfe3b6c10eab93c7ed5d0825251e6fffd94312374 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:15.479 20:07:02 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.qJ3 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.qJ3 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.qJ3 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d14c0b749a949cd03dc3ac1bb47c0b0a 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6Fz 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d14c0b749a949cd03dc3ac1bb47c0b0a 1 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d14c0b749a949cd03dc3ac1bb47c0b0a 1 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d14c0b749a949cd03dc3ac1bb47c0b0a 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6Fz 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6Fz 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.6Fz 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:15.479 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c99ea92088fe29375cc2ab0cec0ddc0ac691e0e279e3da307b6b3f7750dc22ff 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kkZ 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c99ea92088fe29375cc2ab0cec0ddc0ac691e0e279e3da307b6b3f7750dc22ff 3 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c99ea92088fe29375cc2ab0cec0ddc0ac691e0e279e3da307b6b3f7750dc22ff 3 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c99ea92088fe29375cc2ab0cec0ddc0ac691e0e279e3da307b6b3f7750dc22ff 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kkZ 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kkZ 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.kkZ 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3203697 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3203697 ']' 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
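[editor's note] Each gen_dhchap_key call above draws len/2 random bytes with xxd, keeps the resulting hex string itself as the secret, and runs an inline python step to wrap it in the DH-HMAC-CHAP secret representation. A sketch of what that step appears to compute, assuming the TP 8006 layout 'DHHC-1:<hash-id>:base64(secret || CRC-32):' with hash ids 00=null, 01=sha256, 02=sha384, 03=sha512 (consistent with the DHHC-1:00:...== and DHHC-1:03:...= secrets passed to nvme connect below); the function name is a hypothetical stand-in, not the harness's own:

    # gen_dhchap_key_sketch <hash-id> <hex-len>
    gen_dhchap_key_sketch() {
        local hash_id=$1 len=$2 key
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # hex string *is* the secret
        python3 - "$key" "$hash_id" <<'EOF'
    import base64, struct, sys, zlib
    secret = sys.argv[1].encode()                  # ASCII hex, not raw bytes
    crc = struct.pack("<I", zlib.crc32(secret))    # little-endian CRC-32 trailer
    print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(secret + crc).decode()}:")
    EOF
    }
    # gen_dhchap_key_sketch 0 48  ->  DHHC-1:00:<72 base64 chars>:   (cf. keys[0] above)

The 4-byte CRC trailer is why the 48-character key0 secret encodes to 72 base64 characters ending in '==' rather than a clean 64, and why the 64-character sha512 secrets end in a single '='. [end editor's note]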
00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:15.480 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.738 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:15.738 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:15.738 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3203722 /var/tmp/host.sock 00:19:15.738 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3203722 ']' 00:19:15.738 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:19:15.738 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:15.738 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:15.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:15.738 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:15.738 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.996 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:15.996 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:19:15.996 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:15.996 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.996 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.996 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.996 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:15.996 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DLW 00:19:15.996 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.996 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.254 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.254 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DLW 00:19:16.254 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DLW 00:19:16.254 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.ZY0 ]] 00:19:16.254 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZY0 00:19:16.254 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.254 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.511 20:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.511 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZY0 00:19:16.511 20:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZY0 00:19:16.511 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:16.511 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0mC 00:19:16.511 20:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.511 20:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.511 20:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.511 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.0mC 00:19:16.511 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.0mC 00:19:16.769 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.LyW ]] 00:19:16.769 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LyW 00:19:16.769 20:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.769 20:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.769 20:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.769 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LyW 00:19:16.769 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LyW 00:19:17.027 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:17.027 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.qJ3 00:19:17.027 20:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.027 20:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.027 20:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.027 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.qJ3 00:19:17.027 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.qJ3 00:19:17.285 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.6Fz ]] 00:19:17.285 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Fz 00:19:17.285 20:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.285 20:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.285 20:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.285 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6Fz 00:19:17.285 20:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.6Fz 00:19:17.543 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:17.543 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kkZ 00:19:17.543 20:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.543 20:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.543 20:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.543 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.kkZ 00:19:17.543 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.kkZ 00:19:17.801 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:17.801 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:17.801 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:17.801 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.801 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:17.801 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:18.060 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:18.060 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.060 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:18.060 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:18.060 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:18.060 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.060 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.060 20:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.060 20:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.060 20:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.060 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.060 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.317 00:19:18.575 20:07:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.575 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.575 20:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.833 20:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.833 20:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.833 20:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.833 20:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.833 20:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.833 20:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.833 { 00:19:18.833 "cntlid": 1, 00:19:18.833 "qid": 0, 00:19:18.833 "state": "enabled", 00:19:18.833 "listen_address": { 00:19:18.833 "trtype": "TCP", 00:19:18.833 "adrfam": "IPv4", 00:19:18.833 "traddr": "10.0.0.2", 00:19:18.833 "trsvcid": "4420" 00:19:18.833 }, 00:19:18.833 "peer_address": { 00:19:18.833 "trtype": "TCP", 00:19:18.833 "adrfam": "IPv4", 00:19:18.833 "traddr": "10.0.0.1", 00:19:18.833 "trsvcid": "39422" 00:19:18.833 }, 00:19:18.833 "auth": { 00:19:18.833 "state": "completed", 00:19:18.833 "digest": "sha256", 00:19:18.833 "dhgroup": "null" 00:19:18.833 } 00:19:18.833 } 00:19:18.833 ]' 00:19:18.833 20:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.833 20:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:18.833 20:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.833 20:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:18.833 20:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.833 20:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.833 20:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.833 20:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.090 20:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:19:20.024 20:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.024 20:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:20.024 20:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.024 20:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:20.024 20:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.024 20:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.024 20:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:20.024 20:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:20.281 20:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:20.281 20:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.281 20:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:20.281 20:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:20.281 20:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:20.281 20:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.281 20:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.281 20:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.281 20:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.281 20:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.281 20:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.281 20:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.539 00:19:20.539 20:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.539 20:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.539 20:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.797 20:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.797 20:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.797 20:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.797 20:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.797 20:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.797 20:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.797 { 00:19:20.797 "cntlid": 3, 00:19:20.797 "qid": 0, 00:19:20.797 "state": "enabled", 00:19:20.797 "listen_address": { 00:19:20.797 
"trtype": "TCP", 00:19:20.797 "adrfam": "IPv4", 00:19:20.797 "traddr": "10.0.0.2", 00:19:20.797 "trsvcid": "4420" 00:19:20.797 }, 00:19:20.797 "peer_address": { 00:19:20.797 "trtype": "TCP", 00:19:20.797 "adrfam": "IPv4", 00:19:20.797 "traddr": "10.0.0.1", 00:19:20.797 "trsvcid": "57906" 00:19:20.797 }, 00:19:20.797 "auth": { 00:19:20.797 "state": "completed", 00:19:20.797 "digest": "sha256", 00:19:20.797 "dhgroup": "null" 00:19:20.797 } 00:19:20.797 } 00:19:20.797 ]' 00:19:20.797 20:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.054 20:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:21.054 20:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.054 20:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:21.054 20:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.054 20:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.054 20:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.054 20:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.311 20:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:19:22.243 20:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.243 20:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:22.243 20:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.243 20:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.243 20:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.243 20:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.243 20:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:22.243 20:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:22.501 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:22.501 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.501 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.501 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:22.501 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:22.501 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- 
# ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.501 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.501 20:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.501 20:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.501 20:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.501 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.501 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.758 00:19:22.758 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.758 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.758 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.015 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.015 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.015 20:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.015 20:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.015 20:07:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.015 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.015 { 00:19:23.015 "cntlid": 5, 00:19:23.015 "qid": 0, 00:19:23.015 "state": "enabled", 00:19:23.015 "listen_address": { 00:19:23.015 "trtype": "TCP", 00:19:23.015 "adrfam": "IPv4", 00:19:23.015 "traddr": "10.0.0.2", 00:19:23.015 "trsvcid": "4420" 00:19:23.015 }, 00:19:23.015 "peer_address": { 00:19:23.015 "trtype": "TCP", 00:19:23.015 "adrfam": "IPv4", 00:19:23.015 "traddr": "10.0.0.1", 00:19:23.015 "trsvcid": "57936" 00:19:23.015 }, 00:19:23.015 "auth": { 00:19:23.015 "state": "completed", 00:19:23.015 "digest": "sha256", 00:19:23.015 "dhgroup": "null" 00:19:23.015 } 00:19:23.015 } 00:19:23.015 ]' 00:19:23.015 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.015 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:23.015 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.272 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:23.272 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.272 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.272 20:07:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.272 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.529 20:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:19:24.462 20:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.462 20:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.462 20:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.462 20:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.462 20:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.462 20:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.462 20:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:24.462 20:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:24.720 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:24.720 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.720 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.720 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:24.720 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:24.720 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.720 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:24.720 20:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.720 20:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.720 20:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.720 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.720 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:24.978 00:19:24.978 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.978 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.978 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.236 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.236 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.236 20:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.236 20:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.236 20:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.236 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.236 { 00:19:25.236 "cntlid": 7, 00:19:25.236 "qid": 0, 00:19:25.236 "state": "enabled", 00:19:25.236 "listen_address": { 00:19:25.236 "trtype": "TCP", 00:19:25.236 "adrfam": "IPv4", 00:19:25.236 "traddr": "10.0.0.2", 00:19:25.236 "trsvcid": "4420" 00:19:25.236 }, 00:19:25.236 "peer_address": { 00:19:25.236 "trtype": "TCP", 00:19:25.236 "adrfam": "IPv4", 00:19:25.236 "traddr": "10.0.0.1", 00:19:25.236 "trsvcid": "57944" 00:19:25.236 }, 00:19:25.236 "auth": { 00:19:25.236 "state": "completed", 00:19:25.236 "digest": "sha256", 00:19:25.236 "dhgroup": "null" 00:19:25.236 } 00:19:25.236 } 00:19:25.236 ]' 00:19:25.236 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.236 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.236 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.236 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:25.236 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.236 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.236 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.236 20:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.494 20:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.866 
20:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.866 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.123 00:19:27.380 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.380 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.380 20:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.638 20:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.638 20:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.638 20:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.638 20:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.638 20:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.638 20:07:15 
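The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) assignment seen in the trace is plain bash ${var:+word} expansion: the array gets the --dhchap-ctrlr-key argument pair only when ckeys[$3] is set and non-empty, and stays empty otherwise, so iterations without a controller key silently drop the flag. A two-line illustration:

    unset maybe; args=(${maybe:+--flag val}); echo "${#args[@]}"   # prints 0: flag omitted
    maybe=x;     args=(${maybe:+--flag val}); echo "${#args[@]}"   # prints 2: flag passed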
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.638 { 00:19:27.638 "cntlid": 9, 00:19:27.638 "qid": 0, 00:19:27.638 "state": "enabled", 00:19:27.638 "listen_address": { 00:19:27.638 "trtype": "TCP", 00:19:27.638 "adrfam": "IPv4", 00:19:27.638 "traddr": "10.0.0.2", 00:19:27.638 "trsvcid": "4420" 00:19:27.638 }, 00:19:27.638 "peer_address": { 00:19:27.638 "trtype": "TCP", 00:19:27.638 "adrfam": "IPv4", 00:19:27.638 "traddr": "10.0.0.1", 00:19:27.638 "trsvcid": "57962" 00:19:27.638 }, 00:19:27.638 "auth": { 00:19:27.638 "state": "completed", 00:19:27.638 "digest": "sha256", 00:19:27.638 "dhgroup": "ffdhe2048" 00:19:27.638 } 00:19:27.638 } 00:19:27.638 ]' 00:19:27.638 20:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.638 20:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.638 20:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.638 20:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:27.638 20:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.638 20:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.638 20:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.638 20:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.895 20:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:19:28.825 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.825 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.825 20:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.825 20:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.825 20:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.825 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.825 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:28.825 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:29.083 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:29.083 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:29.083 20:07:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:29.083 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:29.083 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:29.083 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.083 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.083 20:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.083 20:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.083 20:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.083 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.083 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.373 00:19:29.373 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.373 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.373 20:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.637 20:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.637 20:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.637 20:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.637 20:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.637 20:07:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.637 20:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.637 { 00:19:29.637 "cntlid": 11, 00:19:29.637 "qid": 0, 00:19:29.637 "state": "enabled", 00:19:29.637 "listen_address": { 00:19:29.637 "trtype": "TCP", 00:19:29.637 "adrfam": "IPv4", 00:19:29.637 "traddr": "10.0.0.2", 00:19:29.637 "trsvcid": "4420" 00:19:29.637 }, 00:19:29.637 "peer_address": { 00:19:29.637 "trtype": "TCP", 00:19:29.637 "adrfam": "IPv4", 00:19:29.637 "traddr": "10.0.0.1", 00:19:29.637 "trsvcid": "57976" 00:19:29.637 }, 00:19:29.637 "auth": { 00:19:29.637 "state": "completed", 00:19:29.637 "digest": "sha256", 00:19:29.637 "dhgroup": "ffdhe2048" 00:19:29.637 } 00:19:29.637 } 00:19:29.637 ]' 00:19:29.637 20:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.637 20:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.637 20:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.895 20:07:17 
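The verification step is purely data-driven: nvmf_subsystem_get_qpairs returns a JSON array whose auth object carries the negotiated parameters, and jq pulls each field out for comparison. The backslash-heavy right-hand sides in the trace ([[ sha256 == \s\h\a\2\5\6 ]]) are only xtrace's rendering of a quoted pattern; escaping every character makes [[ == ]] compare literally instead of as a glob. Against the same subsystem, the check reduces to:

    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth | .digest, .dhgroup, .state'
    # expected per iteration: the configured digest, the configured dhgroup,
    # and "completed" once the handshake has finished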
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:29.895 20:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.895 20:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.895 20:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.895 20:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.153 20:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:19:31.087 20:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.087 20:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:31.087 20:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.087 20:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.087 20:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.087 20:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.087 20:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:31.087 20:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:31.345 20:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:31.345 20:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.345 20:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.345 20:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:31.345 20:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:31.345 20:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.345 20:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.345 20:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.345 20:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.345 20:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.345 20:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.345 20:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.602 00:19:31.602 20:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.602 20:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.602 20:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.860 20:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.860 20:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.860 20:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.860 20:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.860 20:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.860 20:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.860 { 00:19:31.860 "cntlid": 13, 00:19:31.860 "qid": 0, 00:19:31.860 "state": "enabled", 00:19:31.860 "listen_address": { 00:19:31.860 "trtype": "TCP", 00:19:31.860 "adrfam": "IPv4", 00:19:31.860 "traddr": "10.0.0.2", 00:19:31.860 "trsvcid": "4420" 00:19:31.860 }, 00:19:31.860 "peer_address": { 00:19:31.860 "trtype": "TCP", 00:19:31.860 "adrfam": "IPv4", 00:19:31.860 "traddr": "10.0.0.1", 00:19:31.860 "trsvcid": "47900" 00:19:31.860 }, 00:19:31.860 "auth": { 00:19:31.860 "state": "completed", 00:19:31.860 "digest": "sha256", 00:19:31.860 "dhgroup": "ffdhe2048" 00:19:31.860 } 00:19:31.860 } 00:19:31.860 ]' 00:19:31.860 20:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.860 20:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.860 20:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.860 20:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.860 20:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.860 20:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.860 20:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.860 20:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.118 20:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:19:33.051 20:07:20 
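The --dhchap-secret strings follow the NVMe in-band authentication secret representation DHHC-1:<t>:<base64>:, where <t> names the hash used to transform the configured secret (00 = unhashed, 01/02/03 = SHA-256/384/512) and the base64 payload carries the key material plus an integrity check. Recent nvme-cli releases can mint such secrets; as a sketch, assuming a gen-dhchap-key subcommand with --hmac and --key-length options (verify against nvme gen-dhchap-key --help on the installed version):

    # illustrative only: emit a 32-byte secret with the SHA-256 transform (DHHC-1:01:...)
    nvme gen-dhchap-key --hmac=1 --key-length=32 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55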
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.309 20:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.567 20:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:33.825 00:19:33.825 20:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.825 20:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.825 20:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.083 20:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.083 20:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
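Note the asymmetry once keyid reaches 3: both nvmf_subsystem_add_host and bdev_nvme_attach_controller are invoked with --dhchap-key key3 alone, no --dhchap-ctrlr-key. That exercises unidirectional authentication, where only the host proves possession of a secret; the key0-key2 iterations also pass a controller key, so the target must authenticate back (bidirectional DH-HMAC-CHAP). Side by side:

    # bidirectional: both ends are challenged
    scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # unidirectional: host-to-controller only
    scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3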
00:19:34.083 20:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.083 20:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.083 20:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.083 20:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.083 { 00:19:34.083 "cntlid": 15, 00:19:34.083 "qid": 0, 00:19:34.083 "state": "enabled", 00:19:34.083 "listen_address": { 00:19:34.083 "trtype": "TCP", 00:19:34.083 "adrfam": "IPv4", 00:19:34.083 "traddr": "10.0.0.2", 00:19:34.083 "trsvcid": "4420" 00:19:34.083 }, 00:19:34.083 "peer_address": { 00:19:34.083 "trtype": "TCP", 00:19:34.083 "adrfam": "IPv4", 00:19:34.083 "traddr": "10.0.0.1", 00:19:34.083 "trsvcid": "47916" 00:19:34.083 }, 00:19:34.083 "auth": { 00:19:34.083 "state": "completed", 00:19:34.083 "digest": "sha256", 00:19:34.083 "dhgroup": "ffdhe2048" 00:19:34.083 } 00:19:34.083 } 00:19:34.083 ]' 00:19:34.083 20:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.083 20:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.083 20:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.083 20:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:34.083 20:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.083 20:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.083 20:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.083 20:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.341 20:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:19:35.274 20:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.274 20:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.274 20:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.274 20:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.530 20:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.531 20:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.531 20:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.531 20:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.531 20:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.788 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:35.788 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.788 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.788 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:35.788 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:35.788 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.788 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.788 20:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.788 20:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.788 20:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.788 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.788 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.046 00:19:36.046 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.046 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.046 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.303 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.303 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.303 20:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.303 20:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.303 20:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.303 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.303 { 00:19:36.303 "cntlid": 17, 00:19:36.303 "qid": 0, 00:19:36.303 "state": "enabled", 00:19:36.303 "listen_address": { 00:19:36.303 "trtype": "TCP", 00:19:36.303 "adrfam": "IPv4", 00:19:36.303 "traddr": "10.0.0.2", 00:19:36.303 "trsvcid": "4420" 00:19:36.303 }, 00:19:36.303 "peer_address": { 00:19:36.303 "trtype": "TCP", 00:19:36.303 "adrfam": "IPv4", 00:19:36.303 "traddr": "10.0.0.1", 00:19:36.303 "trsvcid": "47942" 00:19:36.303 }, 00:19:36.303 "auth": { 00:19:36.303 "state": "completed", 00:19:36.303 "digest": "sha256", 00:19:36.303 "dhgroup": "ffdhe3072" 00:19:36.304 } 00:19:36.304 } 00:19:36.304 ]' 00:19:36.304 20:07:23 
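With ffdhe3072 the sweep has moved to its next Diffie-Hellman group. The @92-@96 source markers give away the driver's shape; a reconstruction consistent with the xtrace (array contents are inferred, and only null through ffdhe4096 are visible in this excerpt):

    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096)     # null = no DH exchange at all
    for dhgroup in "${dhgroups[@]}"; do               # target/auth.sh@92
        for keyid in "${!keys[@]}"; do                # target/auth.sh@93
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"   # @94
            connect_authenticate sha256 "$dhgroup" "$keyid"            # @96
        done
    done

The ffdhe* names are the fixed finite-field groups of RFC 7919 (2048-, 3072- and 4096-bit moduli here, with 6144- and 8192-bit siblings defined as well); a larger group strengthens the ephemeral key exchange at the cost of a slower handshake.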
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.304 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.304 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.304 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.304 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.304 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.304 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.304 20:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.869 20:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:19:37.802 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.802 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.802 20:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.802 20:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.802 20:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.802 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.802 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:37.802 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:38.060 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:38.060 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.060 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:38.060 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:38.060 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:38.060 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.060 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.060 20:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.060 
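Two SPDK processes are being driven throughout: the nvmf target on the default RPC socket (reached through rpc_cmd) and a separate host-side bdev application listening on /var/tmp/host.sock. The hostrpc helper expanded at target/auth.sh@31 is nothing more than a socket-selecting wrapper:

    # reconstruction of the wrapper the trace expands at target/auth.sh@31
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }

Keeping the two RPC planes separate lets the test tighten the host's allowed digests and dhgroups between iterations without disturbing the target's subsystem state.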
20:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.060 20:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.060 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.060 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.318 00:19:38.318 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.318 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.318 20:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.576 20:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.576 20:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.576 20:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.576 20:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.576 20:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.576 20:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.576 { 00:19:38.576 "cntlid": 19, 00:19:38.576 "qid": 0, 00:19:38.576 "state": "enabled", 00:19:38.576 "listen_address": { 00:19:38.576 "trtype": "TCP", 00:19:38.576 "adrfam": "IPv4", 00:19:38.576 "traddr": "10.0.0.2", 00:19:38.576 "trsvcid": "4420" 00:19:38.576 }, 00:19:38.577 "peer_address": { 00:19:38.577 "trtype": "TCP", 00:19:38.577 "adrfam": "IPv4", 00:19:38.577 "traddr": "10.0.0.1", 00:19:38.577 "trsvcid": "47968" 00:19:38.577 }, 00:19:38.577 "auth": { 00:19:38.577 "state": "completed", 00:19:38.577 "digest": "sha256", 00:19:38.577 "dhgroup": "ffdhe3072" 00:19:38.577 } 00:19:38.577 } 00:19:38.577 ]' 00:19:38.577 20:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.577 20:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.577 20:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.577 20:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:38.577 20:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.577 20:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.577 20:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.577 20:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.835 20:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:19:39.770 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.770 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.770 20:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.770 20:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.770 20:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.770 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.770 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:39.770 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:40.029 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:40.029 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.029 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:40.029 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:40.029 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:40.029 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.029 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.029 20:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.029 20:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.029 20:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.029 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.029 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.595 00:19:40.595 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.595 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.595 20:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.595 20:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.595 20:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.595 20:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.595 20:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.595 20:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.595 20:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.595 { 00:19:40.595 "cntlid": 21, 00:19:40.595 "qid": 0, 00:19:40.595 "state": "enabled", 00:19:40.595 "listen_address": { 00:19:40.595 "trtype": "TCP", 00:19:40.595 "adrfam": "IPv4", 00:19:40.595 "traddr": "10.0.0.2", 00:19:40.595 "trsvcid": "4420" 00:19:40.595 }, 00:19:40.595 "peer_address": { 00:19:40.595 "trtype": "TCP", 00:19:40.595 "adrfam": "IPv4", 00:19:40.595 "traddr": "10.0.0.1", 00:19:40.595 "trsvcid": "39518" 00:19:40.595 }, 00:19:40.595 "auth": { 00:19:40.595 "state": "completed", 00:19:40.595 "digest": "sha256", 00:19:40.595 "dhgroup": "ffdhe3072" 00:19:40.595 } 00:19:40.595 } 00:19:40.595 ]' 00:19:40.595 20:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.853 20:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.853 20:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.853 20:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:40.853 20:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.853 20:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.853 20:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.853 20:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.111 20:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:19:42.044 20:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.045 20:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.045 20:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.045 20:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.045 20:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.045 20:07:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.045 20:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:42.045 20:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:42.303 20:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:42.303 20:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.303 20:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.303 20:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:42.303 20:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:42.303 20:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.303 20:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:42.303 20:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.303 20:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.303 20:07:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.303 20:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.303 20:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:42.562 00:19:42.562 20:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.562 20:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.562 20:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.820 20:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.820 20:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.820 20:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.820 20:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.820 20:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.820 20:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.820 { 00:19:42.820 "cntlid": 23, 00:19:42.820 "qid": 0, 00:19:42.820 "state": "enabled", 00:19:42.820 "listen_address": { 00:19:42.820 "trtype": "TCP", 00:19:42.820 "adrfam": "IPv4", 00:19:42.820 "traddr": "10.0.0.2", 00:19:42.820 "trsvcid": "4420" 00:19:42.820 }, 00:19:42.820 "peer_address": { 00:19:42.820 "trtype": "TCP", 00:19:42.820 
"adrfam": "IPv4", 00:19:42.820 "traddr": "10.0.0.1", 00:19:42.820 "trsvcid": "39530" 00:19:42.820 }, 00:19:42.820 "auth": { 00:19:42.820 "state": "completed", 00:19:42.820 "digest": "sha256", 00:19:42.820 "dhgroup": "ffdhe3072" 00:19:42.820 } 00:19:42.820 } 00:19:42.820 ]' 00:19:42.820 20:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.077 20:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.077 20:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.077 20:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:43.078 20:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.078 20:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.078 20:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.078 20:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.336 20:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:19:44.270 20:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.270 20:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.270 20:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.270 20:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.270 20:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.270 20:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.270 20:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.270 20:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:44.270 20:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:44.561 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:44.561 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.561 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.561 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:44.561 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:44.561 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.561 20:07:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.561 20:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.561 20:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.561 20:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.561 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.561 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.126 00:19:45.126 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.126 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.126 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.127 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.127 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.127 20:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.127 20:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.127 20:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.127 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.127 { 00:19:45.127 "cntlid": 25, 00:19:45.127 "qid": 0, 00:19:45.127 "state": "enabled", 00:19:45.127 "listen_address": { 00:19:45.127 "trtype": "TCP", 00:19:45.127 "adrfam": "IPv4", 00:19:45.127 "traddr": "10.0.0.2", 00:19:45.127 "trsvcid": "4420" 00:19:45.127 }, 00:19:45.127 "peer_address": { 00:19:45.127 "trtype": "TCP", 00:19:45.127 "adrfam": "IPv4", 00:19:45.127 "traddr": "10.0.0.1", 00:19:45.127 "trsvcid": "39556" 00:19:45.127 }, 00:19:45.127 "auth": { 00:19:45.127 "state": "completed", 00:19:45.127 "digest": "sha256", 00:19:45.127 "dhgroup": "ffdhe4096" 00:19:45.127 } 00:19:45.127 } 00:19:45.127 ]' 00:19:45.127 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.386 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.386 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.386 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.386 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.386 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.386 20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.386 
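The xtrace_disable / set +x / [[ 0 == 0 ]] triplets bracketing every rpc_cmd come from common/autotest_common.sh: tracing is suspended while the RPC runs, so large JSON replies do not flood the xtrace, and the command's exit status is asserted once it returns. A simplified sketch of that wrapper, assuming it merely forwards to rpc.py (the in-tree version is more elaborate and, among other things, can reuse a persistent RPC connection):

    rpc_cmd() {
        xtrace_disable                  # the @559 lines in this trace
        scripts/rpc.py "$@"
        local rc=$?
        xtrace_restore
        [[ $rc == 0 ]]                  # surfaces as "[[ 0 == 0 ]]" at @587
    }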
20:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.642 20:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:19:46.574 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.574 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.574 20:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.574 20:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.574 20:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.574 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.574 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.574 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.832 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:46.832 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.832 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.832 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:46.832 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:46.832 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.832 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.832 20:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.832 20:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.832 20:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.832 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.832 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.400 00:19:47.400 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.400 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.400 20:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.400 20:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.400 20:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.400 20:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.400 20:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.658 20:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.658 20:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.658 { 00:19:47.658 "cntlid": 27, 00:19:47.658 "qid": 0, 00:19:47.658 "state": "enabled", 00:19:47.658 "listen_address": { 00:19:47.658 "trtype": "TCP", 00:19:47.658 "adrfam": "IPv4", 00:19:47.658 "traddr": "10.0.0.2", 00:19:47.658 "trsvcid": "4420" 00:19:47.658 }, 00:19:47.658 "peer_address": { 00:19:47.658 "trtype": "TCP", 00:19:47.658 "adrfam": "IPv4", 00:19:47.658 "traddr": "10.0.0.1", 00:19:47.658 "trsvcid": "39596" 00:19:47.658 }, 00:19:47.658 "auth": { 00:19:47.658 "state": "completed", 00:19:47.658 "digest": "sha256", 00:19:47.658 "dhgroup": "ffdhe4096" 00:19:47.658 } 00:19:47.658 } 00:19:47.658 ]' 00:19:47.658 20:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.658 20:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.658 20:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.658 20:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:47.658 20:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.658 20:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.658 20:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.658 20:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.916 20:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:19:48.852 20:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.852 20:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
00:19:48.852 20:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.852 20:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.852 20:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.852 20:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.852 20:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:48.852 20:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:49.109 20:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:49.109 20:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.109 20:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.109 20:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:49.109 20:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:49.109 20:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.109 20:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.109 20:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.109 20:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.109 20:07:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.109 20:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.109 20:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.675 00:19:49.675 20:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.675 20:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.675 20:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.934 20:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.934 20:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.934 20:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.934 20:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.934 20:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.934 
20:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.934 { 00:19:49.934 "cntlid": 29, 00:19:49.934 "qid": 0, 00:19:49.934 "state": "enabled", 00:19:49.934 "listen_address": { 00:19:49.934 "trtype": "TCP", 00:19:49.934 "adrfam": "IPv4", 00:19:49.934 "traddr": "10.0.0.2", 00:19:49.934 "trsvcid": "4420" 00:19:49.934 }, 00:19:49.934 "peer_address": { 00:19:49.934 "trtype": "TCP", 00:19:49.934 "adrfam": "IPv4", 00:19:49.934 "traddr": "10.0.0.1", 00:19:49.934 "trsvcid": "39634" 00:19:49.934 }, 00:19:49.934 "auth": { 00:19:49.934 "state": "completed", 00:19:49.934 "digest": "sha256", 00:19:49.934 "dhgroup": "ffdhe4096" 00:19:49.934 } 00:19:49.934 } 00:19:49.934 ]' 00:19:49.934 20:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.934 20:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.934 20:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.934 20:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:49.934 20:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.934 20:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.934 20:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.934 20:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.192 20:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:19:51.127 20:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.127 20:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:51.127 20:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.127 20:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.127 20:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.128 20:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.128 20:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.128 20:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.384 20:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:51.384 20:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.384 20:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:19:51.384 20:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:51.384 20:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:51.385 20:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.385 20:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:51.385 20:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.385 20:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.385 20:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.385 20:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.385 20:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.642 00:19:51.901 20:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.901 20:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.901 20:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.159 20:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.159 20:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.159 20:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.159 20:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.159 20:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.159 20:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.159 { 00:19:52.159 "cntlid": 31, 00:19:52.159 "qid": 0, 00:19:52.159 "state": "enabled", 00:19:52.159 "listen_address": { 00:19:52.159 "trtype": "TCP", 00:19:52.159 "adrfam": "IPv4", 00:19:52.159 "traddr": "10.0.0.2", 00:19:52.159 "trsvcid": "4420" 00:19:52.159 }, 00:19:52.159 "peer_address": { 00:19:52.159 "trtype": "TCP", 00:19:52.159 "adrfam": "IPv4", 00:19:52.159 "traddr": "10.0.0.1", 00:19:52.159 "trsvcid": "51836" 00:19:52.159 }, 00:19:52.159 "auth": { 00:19:52.159 "state": "completed", 00:19:52.159 "digest": "sha256", 00:19:52.159 "dhgroup": "ffdhe4096" 00:19:52.159 } 00:19:52.159 } 00:19:52.159 ]' 00:19:52.159 20:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.159 20:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.159 20:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.159 20:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:52.159 20:07:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.159 20:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.159 20:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.159 20:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.416 20:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:19:53.354 20:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.354 20:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.354 20:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.354 20:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.354 20:07:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.354 20:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.354 20:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.354 20:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.354 20:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:53.612 20:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:53.612 20:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.612 20:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.612 20:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:53.612 20:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:53.612 20:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.612 20:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.612 20:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.612 20:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.612 20:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.612 20:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:19:53.612 20:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.181 00:19:54.439 20:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.439 20:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.439 20:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.439 20:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.439 20:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.439 20:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.439 20:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.698 20:07:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.698 20:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.698 { 00:19:54.698 "cntlid": 33, 00:19:54.698 "qid": 0, 00:19:54.698 "state": "enabled", 00:19:54.698 "listen_address": { 00:19:54.698 "trtype": "TCP", 00:19:54.698 "adrfam": "IPv4", 00:19:54.698 "traddr": "10.0.0.2", 00:19:54.698 "trsvcid": "4420" 00:19:54.698 }, 00:19:54.698 "peer_address": { 00:19:54.698 "trtype": "TCP", 00:19:54.698 "adrfam": "IPv4", 00:19:54.698 "traddr": "10.0.0.1", 00:19:54.698 "trsvcid": "51868" 00:19:54.698 }, 00:19:54.698 "auth": { 00:19:54.698 "state": "completed", 00:19:54.698 "digest": "sha256", 00:19:54.698 "dhgroup": "ffdhe6144" 00:19:54.698 } 00:19:54.698 } 00:19:54.698 ]' 00:19:54.698 20:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.698 20:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.698 20:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.698 20:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:54.698 20:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.698 20:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.698 20:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.698 20:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.957 20:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:19:55.891 20:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:55.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.891 20:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.891 20:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.891 20:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.891 20:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.891 20:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.891 20:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:55.891 20:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:56.456 20:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:56.456 20:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.456 20:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:56.456 20:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:56.456 20:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:56.456 20:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.456 20:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.456 20:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.456 20:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.456 20:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.456 20:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.456 20:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.023 00:19:57.023 20:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.023 20:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.023 20:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.023 20:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.023 20:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
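The nvmf_subsystem_get_qpairs call above, together with the qpair JSON that follows, is the actual assertion step of connect_authenticate: the auth descriptor of the established queue pair must report the digest, dhgroup, and state that the current round configured. Condensed into a standalone sketch (rpc.py paths shortened; the jq filters are the ones visible in the log), assuming the sha256/ffdhe6144 round running here:

  # the host-side SPDK app answers on /var/tmp/host.sock, the target on the default socket
  [[ $(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]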
00:19:57.023 20:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.023 20:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.023 20:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.023 20:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.023 { 00:19:57.023 "cntlid": 35, 00:19:57.023 "qid": 0, 00:19:57.023 "state": "enabled", 00:19:57.023 "listen_address": { 00:19:57.023 "trtype": "TCP", 00:19:57.023 "adrfam": "IPv4", 00:19:57.023 "traddr": "10.0.0.2", 00:19:57.023 "trsvcid": "4420" 00:19:57.023 }, 00:19:57.023 "peer_address": { 00:19:57.023 "trtype": "TCP", 00:19:57.023 "adrfam": "IPv4", 00:19:57.023 "traddr": "10.0.0.1", 00:19:57.023 "trsvcid": "51894" 00:19:57.023 }, 00:19:57.023 "auth": { 00:19:57.023 "state": "completed", 00:19:57.023 "digest": "sha256", 00:19:57.023 "dhgroup": "ffdhe6144" 00:19:57.023 } 00:19:57.023 } 00:19:57.023 ]' 00:19:57.023 20:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.280 20:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.280 20:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.280 20:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:57.280 20:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.280 20:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.280 20:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.280 20:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.536 20:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:19:58.467 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.467 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.467 20:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.467 20:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.467 20:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.467 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.467 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:58.467 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
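The bdev_nvme_set_options call just above opens the next round of the sweep (sha256 with ffdhe6144, key2). Every round in this log has the same shape; a condensed sketch follows, using only RPCs visible here, with hostrpc standing for rpc.py -s /var/tmp/host.sock (as its expansions above show), rpc_cmd for the target-side rpc.py on the default socket (an inference from the missing -s flag), and HOSTNQN for the full uuid host NQN used throughout:

  hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
          --dhchap-key key2 --dhchap-ctrlr-key ckey2
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # assert the qpair auth descriptor (see the jq sketch above), then tear down:
  hostrpc bdev_nvme_detach_controller nvme0
  # repeat the same handshake from the kernel initiator, then deregister the host:
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"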
00:19:58.725 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:58.725 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.725 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:58.725 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:58.725 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:58.725 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.725 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.725 20:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.725 20:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.725 20:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.725 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.725 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.291 00:19:59.291 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.291 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.291 20:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.582 20:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.582 20:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.582 20:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.582 20:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.582 20:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.582 20:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.582 { 00:19:59.582 "cntlid": 37, 00:19:59.582 "qid": 0, 00:19:59.582 "state": "enabled", 00:19:59.582 "listen_address": { 00:19:59.582 "trtype": "TCP", 00:19:59.582 "adrfam": "IPv4", 00:19:59.582 "traddr": "10.0.0.2", 00:19:59.582 "trsvcid": "4420" 00:19:59.582 }, 00:19:59.582 "peer_address": { 00:19:59.582 "trtype": "TCP", 00:19:59.582 "adrfam": "IPv4", 00:19:59.582 "traddr": "10.0.0.1", 00:19:59.582 "trsvcid": "51914" 00:19:59.582 }, 00:19:59.582 "auth": { 00:19:59.582 "state": "completed", 00:19:59.582 "digest": "sha256", 00:19:59.583 "dhgroup": "ffdhe6144" 00:19:59.583 } 00:19:59.583 } 00:19:59.583 ]' 00:19:59.583 20:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:59.583 20:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.583 20:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.583 20:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:59.583 20:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.859 20:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.859 20:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.859 20:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.117 20:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:20:01.052 20:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.052 20:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:01.052 20:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.052 20:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.052 20:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.052 20:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.052 20:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.052 20:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:01.311 20:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:20:01.311 20:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.311 20:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:01.311 20:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:01.311 20:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:01.311 20:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.311 20:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:01.311 20:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.311 20:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.311 20:07:48 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.311 20:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.311 20:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.877 00:20:01.877 20:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.877 20:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.877 20:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.135 20:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.135 20:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.135 20:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.135 20:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.135 20:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.135 20:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.135 { 00:20:02.135 "cntlid": 39, 00:20:02.135 "qid": 0, 00:20:02.135 "state": "enabled", 00:20:02.135 "listen_address": { 00:20:02.135 "trtype": "TCP", 00:20:02.135 "adrfam": "IPv4", 00:20:02.135 "traddr": "10.0.0.2", 00:20:02.135 "trsvcid": "4420" 00:20:02.135 }, 00:20:02.135 "peer_address": { 00:20:02.135 "trtype": "TCP", 00:20:02.135 "adrfam": "IPv4", 00:20:02.135 "traddr": "10.0.0.1", 00:20:02.135 "trsvcid": "46432" 00:20:02.135 }, 00:20:02.135 "auth": { 00:20:02.135 "state": "completed", 00:20:02.135 "digest": "sha256", 00:20:02.135 "dhgroup": "ffdhe6144" 00:20:02.135 } 00:20:02.135 } 00:20:02.135 ]' 00:20:02.135 20:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.135 20:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.135 20:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.135 20:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:02.135 20:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.135 20:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.135 20:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.135 20:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.396 20:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:20:03.774 20:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.774 20:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.712 00:20:04.712 20:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.712 20:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.712 20:07:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.970 20:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.970 20:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.970 20:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.970 20:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.970 20:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.970 20:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.970 { 00:20:04.970 "cntlid": 41, 00:20:04.970 "qid": 0, 00:20:04.970 "state": "enabled", 00:20:04.970 "listen_address": { 00:20:04.970 "trtype": "TCP", 00:20:04.970 "adrfam": "IPv4", 00:20:04.970 "traddr": "10.0.0.2", 00:20:04.970 "trsvcid": "4420" 00:20:04.970 }, 00:20:04.970 "peer_address": { 00:20:04.970 "trtype": "TCP", 00:20:04.970 "adrfam": "IPv4", 00:20:04.970 "traddr": "10.0.0.1", 00:20:04.970 "trsvcid": "46462" 00:20:04.970 }, 00:20:04.970 "auth": { 00:20:04.970 "state": "completed", 00:20:04.970 "digest": "sha256", 00:20:04.970 "dhgroup": "ffdhe8192" 00:20:04.970 } 00:20:04.970 } 00:20:04.970 ]' 00:20:04.970 20:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.970 20:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.970 20:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.970 20:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.970 20:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.228 20:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.228 20:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.228 20:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.485 20:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:20:06.420 20:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.420 20:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.420 20:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.420 20:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.420 20:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.420 20:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.420 20:07:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:06.420 20:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:06.678 20:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:06.678 20:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.678 20:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:06.678 20:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:06.678 20:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:06.678 20:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.678 20:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.678 20:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.678 20:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.678 20:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.678 20:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.678 20:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.641 00:20:07.641 20:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.641 20:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.641 20:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.641 20:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.641 20:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.641 20:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.641 20:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.641 20:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.641 20:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.641 { 00:20:07.641 "cntlid": 43, 00:20:07.641 "qid": 0, 00:20:07.641 "state": "enabled", 00:20:07.641 "listen_address": { 00:20:07.641 "trtype": "TCP", 00:20:07.641 "adrfam": "IPv4", 00:20:07.641 "traddr": "10.0.0.2", 00:20:07.641 "trsvcid": "4420" 00:20:07.641 }, 00:20:07.641 "peer_address": { 00:20:07.641 "trtype": "TCP", 00:20:07.641 
"adrfam": "IPv4", 00:20:07.641 "traddr": "10.0.0.1", 00:20:07.641 "trsvcid": "46498" 00:20:07.641 }, 00:20:07.641 "auth": { 00:20:07.641 "state": "completed", 00:20:07.641 "digest": "sha256", 00:20:07.641 "dhgroup": "ffdhe8192" 00:20:07.641 } 00:20:07.641 } 00:20:07.641 ]' 00:20:07.641 20:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.900 20:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.900 20:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.900 20:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:07.900 20:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.900 20:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.900 20:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.900 20:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.158 20:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:20:09.096 20:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.096 20:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:09.096 20:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.096 20:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.096 20:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.096 20:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.096 20:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.096 20:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:09.354 20:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:09.354 20:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.354 20:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:09.354 20:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:09.354 20:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:09.354 20:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.354 20:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.354 20:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.354 20:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.354 20:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.354 20:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.354 20:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.293 00:20:10.293 20:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.293 20:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.293 20:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.551 20:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.551 20:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.551 20:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.551 20:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.551 20:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.551 20:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:10.551 { 00:20:10.551 "cntlid": 45, 00:20:10.551 "qid": 0, 00:20:10.551 "state": "enabled", 00:20:10.551 "listen_address": { 00:20:10.551 "trtype": "TCP", 00:20:10.551 "adrfam": "IPv4", 00:20:10.551 "traddr": "10.0.0.2", 00:20:10.551 "trsvcid": "4420" 00:20:10.551 }, 00:20:10.551 "peer_address": { 00:20:10.551 "trtype": "TCP", 00:20:10.551 "adrfam": "IPv4", 00:20:10.551 "traddr": "10.0.0.1", 00:20:10.551 "trsvcid": "46512" 00:20:10.551 }, 00:20:10.551 "auth": { 00:20:10.551 "state": "completed", 00:20:10.551 "digest": "sha256", 00:20:10.551 "dhgroup": "ffdhe8192" 00:20:10.551 } 00:20:10.551 } 00:20:10.551 ]' 00:20:10.551 20:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:10.551 20:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.551 20:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:10.551 20:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:10.551 20:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:10.551 20:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.551 20:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.551 20:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.810 20:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:20:11.748 20:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.748 20:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:11.748 20:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.748 20:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.748 20:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.748 20:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.748 20:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:11.748 20:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:12.008 20:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:12.008 20:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.008 20:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:12.008 20:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:12.008 20:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:12.008 20:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.008 20:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:12.008 20:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.008 20:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.008 20:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.008 20:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:12.266 20:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.204 00:20:13.204 20:08:00 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.204 20:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.204 20:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.204 20:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.204 20:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.204 20:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.204 20:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.204 20:08:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.204 20:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.204 { 00:20:13.204 "cntlid": 47, 00:20:13.204 "qid": 0, 00:20:13.204 "state": "enabled", 00:20:13.204 "listen_address": { 00:20:13.204 "trtype": "TCP", 00:20:13.204 "adrfam": "IPv4", 00:20:13.204 "traddr": "10.0.0.2", 00:20:13.204 "trsvcid": "4420" 00:20:13.204 }, 00:20:13.204 "peer_address": { 00:20:13.204 "trtype": "TCP", 00:20:13.204 "adrfam": "IPv4", 00:20:13.204 "traddr": "10.0.0.1", 00:20:13.204 "trsvcid": "53376" 00:20:13.204 }, 00:20:13.204 "auth": { 00:20:13.204 "state": "completed", 00:20:13.204 "digest": "sha256", 00:20:13.204 "dhgroup": "ffdhe8192" 00:20:13.204 } 00:20:13.204 } 00:20:13.204 ]' 00:20:13.204 20:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.462 20:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.462 20:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.462 20:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:13.462 20:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.462 20:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.462 20:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.462 20:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.721 20:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:20:14.653 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.653 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:14.653 20:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.653 20:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.653 20:08:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.653 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:14.653 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.653 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.653 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.653 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:14.910 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:14.910 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.910 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:14.910 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:14.910 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:14.910 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.910 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.910 20:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.910 20:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.910 20:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.910 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.910 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:15.168 00:20:15.168 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.169 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.169 20:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.458 20:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.458 20:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.458 20:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.458 20:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.458 20:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.458 20:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:15.458 { 00:20:15.458 "cntlid": 49, 00:20:15.458 "qid": 0, 00:20:15.458 "state": "enabled", 00:20:15.458 "listen_address": { 00:20:15.458 "trtype": "TCP", 00:20:15.458 "adrfam": "IPv4", 00:20:15.458 "traddr": "10.0.0.2", 00:20:15.458 "trsvcid": "4420" 00:20:15.458 }, 00:20:15.458 "peer_address": { 00:20:15.458 "trtype": "TCP", 00:20:15.458 "adrfam": "IPv4", 00:20:15.458 "traddr": "10.0.0.1", 00:20:15.458 "trsvcid": "53406" 00:20:15.458 }, 00:20:15.458 "auth": { 00:20:15.458 "state": "completed", 00:20:15.458 "digest": "sha384", 00:20:15.458 "dhgroup": "null" 00:20:15.458 } 00:20:15.458 } 00:20:15.458 ]' 00:20:15.458 20:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.458 20:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.458 20:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.715 20:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:15.715 20:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.715 20:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.715 20:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.716 20:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.974 20:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:20:16.909 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.909 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.909 20:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.909 20:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.909 20:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.909 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.909 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:16.909 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:17.167 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:17.167 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.167 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:17.167 20:08:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:17.167 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:17.167 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.167 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.167 20:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.167 20:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.167 20:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.167 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.167 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.424 00:20:17.424 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.424 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.424 20:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.681 20:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.681 20:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.681 20:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.681 20:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.681 20:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.681 20:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.681 { 00:20:17.681 "cntlid": 51, 00:20:17.681 "qid": 0, 00:20:17.681 "state": "enabled", 00:20:17.681 "listen_address": { 00:20:17.681 "trtype": "TCP", 00:20:17.681 "adrfam": "IPv4", 00:20:17.681 "traddr": "10.0.0.2", 00:20:17.681 "trsvcid": "4420" 00:20:17.681 }, 00:20:17.681 "peer_address": { 00:20:17.681 "trtype": "TCP", 00:20:17.681 "adrfam": "IPv4", 00:20:17.681 "traddr": "10.0.0.1", 00:20:17.681 "trsvcid": "53436" 00:20:17.681 }, 00:20:17.681 "auth": { 00:20:17.681 "state": "completed", 00:20:17.681 "digest": "sha384", 00:20:17.681 "dhgroup": "null" 00:20:17.681 } 00:20:17.681 } 00:20:17.681 ]' 00:20:17.681 20:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.681 20:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:17.681 20:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.681 20:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:17.681 20:08:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.681 20:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.681 20:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.682 20:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.249 20:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:20:19.185 20:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.185 20:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.185 20:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.185 20:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.185 20:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.185 20:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.185 20:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.185 20:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:19.443 20:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:19.443 20:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.443 20:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:19.443 20:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:19.443 20:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:19.443 20:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.443 20:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.443 20:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.443 20:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.443 20:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.443 20:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
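For orientation, each (digest, dhgroup, keyid) round the loop above drives reduces to three RPCs plus a verification pass. A minimal sketch of the round in flight here (sha384 / null / key2), assuming the target app answers on rpc.py's default socket while the host app listens on /var/tmp/host.sock, with every flag exactly as logged:

  #!/usr/bin/env bash
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # Host side (the "hostrpc" wrapper above): pin the digest/dhgroup under test.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups null

  # Target side ("rpc_cmd" above; default RPC socket assumed): authorize the
  # host NQN on the subsystem with the key pair under test.
  "$rpc" nvmf_subsystem_add_host "$nqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach; a successful attach implies DH-HMAC-CHAP negotiated.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$nqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Verification, as in the recurring jq checks in this log: controller name
  # first, then the negotiated auth parameters reported for the active qpair.
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name') == nvme0 ]]
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$nqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
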
00:20:19.444 20:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.702 00:20:19.702 20:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.702 20:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.702 20:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.960 20:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.960 20:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.960 20:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.960 20:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.960 20:08:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.960 20:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.960 { 00:20:19.960 "cntlid": 53, 00:20:19.960 "qid": 0, 00:20:19.960 "state": "enabled", 00:20:19.960 "listen_address": { 00:20:19.960 "trtype": "TCP", 00:20:19.960 "adrfam": "IPv4", 00:20:19.960 "traddr": "10.0.0.2", 00:20:19.960 "trsvcid": "4420" 00:20:19.960 }, 00:20:19.960 "peer_address": { 00:20:19.960 "trtype": "TCP", 00:20:19.960 "adrfam": "IPv4", 00:20:19.960 "traddr": "10.0.0.1", 00:20:19.960 "trsvcid": "53454" 00:20:19.961 }, 00:20:19.961 "auth": { 00:20:19.961 "state": "completed", 00:20:19.961 "digest": "sha384", 00:20:19.961 "dhgroup": "null" 00:20:19.961 } 00:20:19.961 } 00:20:19.961 ]' 00:20:19.961 20:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.961 20:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.961 20:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.961 20:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:19.961 20:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.961 20:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.961 20:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.961 20:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.219 20:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:20:21.156 20:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:20:21.156 20:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:21.156 20:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.156 20:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.156 20:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.156 20:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.156 20:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:21.156 20:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:21.720 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:21.720 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.720 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.720 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:21.720 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:21.720 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.720 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:21.720 20:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.720 20:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.720 20:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.720 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.720 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:21.978 00:20:21.978 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.978 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.978 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.235 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.235 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.235 20:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.235 20:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:22.235 20:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.235 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.236 { 00:20:22.236 "cntlid": 55, 00:20:22.236 "qid": 0, 00:20:22.236 "state": "enabled", 00:20:22.236 "listen_address": { 00:20:22.236 "trtype": "TCP", 00:20:22.236 "adrfam": "IPv4", 00:20:22.236 "traddr": "10.0.0.2", 00:20:22.236 "trsvcid": "4420" 00:20:22.236 }, 00:20:22.236 "peer_address": { 00:20:22.236 "trtype": "TCP", 00:20:22.236 "adrfam": "IPv4", 00:20:22.236 "traddr": "10.0.0.1", 00:20:22.236 "trsvcid": "36314" 00:20:22.236 }, 00:20:22.236 "auth": { 00:20:22.236 "state": "completed", 00:20:22.236 "digest": "sha384", 00:20:22.236 "dhgroup": "null" 00:20:22.236 } 00:20:22.236 } 00:20:22.236 ]' 00:20:22.236 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.236 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.236 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.236 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:22.236 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.236 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.236 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.236 20:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.493 20:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:20:23.427 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.427 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.427 20:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.427 20:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.685 20:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.685 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.685 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.685 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.685 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:23.943 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:23.943 20:08:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.943 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.943 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:23.943 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:23.943 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.943 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.943 20:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.943 20:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.943 20:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.943 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.943 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.201 00:20:24.201 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.201 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.201 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.459 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.459 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.459 20:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.459 20:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.459 20:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.459 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.459 { 00:20:24.459 "cntlid": 57, 00:20:24.459 "qid": 0, 00:20:24.459 "state": "enabled", 00:20:24.459 "listen_address": { 00:20:24.459 "trtype": "TCP", 00:20:24.459 "adrfam": "IPv4", 00:20:24.459 "traddr": "10.0.0.2", 00:20:24.459 "trsvcid": "4420" 00:20:24.459 }, 00:20:24.459 "peer_address": { 00:20:24.459 "trtype": "TCP", 00:20:24.459 "adrfam": "IPv4", 00:20:24.459 "traddr": "10.0.0.1", 00:20:24.459 "trsvcid": "36362" 00:20:24.459 }, 00:20:24.459 "auth": { 00:20:24.459 "state": "completed", 00:20:24.459 "digest": "sha384", 00:20:24.459 "dhgroup": "ffdhe2048" 00:20:24.459 } 00:20:24.459 } 00:20:24.459 ]' 00:20:24.459 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.459 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.459 20:08:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.459 20:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:24.459 20:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.459 20:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.459 20:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.459 20:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.717 20:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:20:25.649 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.649 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.649 20:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.649 20:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.649 20:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.649 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.649 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.649 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:25.906 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:25.906 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.906 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.906 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:25.906 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:25.906 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.906 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.906 20:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.906 20:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.906 20:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.906 20:08:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.906 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.472 00:20:26.472 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.472 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.472 20:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.731 20:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.731 20:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.731 20:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.731 20:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.731 20:08:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.731 20:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.731 { 00:20:26.731 "cntlid": 59, 00:20:26.731 "qid": 0, 00:20:26.731 "state": "enabled", 00:20:26.731 "listen_address": { 00:20:26.731 "trtype": "TCP", 00:20:26.731 "adrfam": "IPv4", 00:20:26.731 "traddr": "10.0.0.2", 00:20:26.731 "trsvcid": "4420" 00:20:26.731 }, 00:20:26.731 "peer_address": { 00:20:26.731 "trtype": "TCP", 00:20:26.731 "adrfam": "IPv4", 00:20:26.731 "traddr": "10.0.0.1", 00:20:26.731 "trsvcid": "36384" 00:20:26.731 }, 00:20:26.731 "auth": { 00:20:26.731 "state": "completed", 00:20:26.731 "digest": "sha384", 00:20:26.731 "dhgroup": "ffdhe2048" 00:20:26.731 } 00:20:26.731 } 00:20:26.731 ]' 00:20:26.731 20:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.731 20:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.731 20:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.731 20:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:26.731 20:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.731 20:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.731 20:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.731 20:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.989 20:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:20:27.924 20:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.924 20:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.924 20:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.924 20:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.924 20:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.924 20:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.924 20:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.924 20:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.182 20:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:28.182 20:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.182 20:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:28.182 20:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:28.182 20:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:28.182 20:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.182 20:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.182 20:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.182 20:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.182 20:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.182 20:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.182 20:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.441 00:20:28.441 20:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.441 20:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.441 20:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:28.701 20:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.701 20:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.701 20:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.701 20:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.960 20:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.960 20:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.960 { 00:20:28.960 "cntlid": 61, 00:20:28.960 "qid": 0, 00:20:28.960 "state": "enabled", 00:20:28.960 "listen_address": { 00:20:28.960 "trtype": "TCP", 00:20:28.960 "adrfam": "IPv4", 00:20:28.960 "traddr": "10.0.0.2", 00:20:28.960 "trsvcid": "4420" 00:20:28.960 }, 00:20:28.960 "peer_address": { 00:20:28.960 "trtype": "TCP", 00:20:28.960 "adrfam": "IPv4", 00:20:28.960 "traddr": "10.0.0.1", 00:20:28.960 "trsvcid": "36396" 00:20:28.960 }, 00:20:28.960 "auth": { 00:20:28.960 "state": "completed", 00:20:28.960 "digest": "sha384", 00:20:28.960 "dhgroup": "ffdhe2048" 00:20:28.960 } 00:20:28.960 } 00:20:28.960 ]' 00:20:28.960 20:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.960 20:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.960 20:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.960 20:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:28.960 20:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.960 20:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.960 20:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.960 20:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.218 20:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:20:30.156 20:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.156 20:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.156 20:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.156 20:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.156 20:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.156 20:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.156 20:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe2048 00:20:30.156 20:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:30.444 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:30.444 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.444 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:30.444 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:30.444 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:30.444 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.444 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:30.444 20:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.444 20:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.444 20:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.444 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.444 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:30.702 00:20:30.702 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.702 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.702 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.960 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.961 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.961 20:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.961 20:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.961 20:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.961 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.961 { 00:20:30.961 "cntlid": 63, 00:20:30.961 "qid": 0, 00:20:30.961 "state": "enabled", 00:20:30.961 "listen_address": { 00:20:30.961 "trtype": "TCP", 00:20:30.961 "adrfam": "IPv4", 00:20:30.961 "traddr": "10.0.0.2", 00:20:30.961 "trsvcid": "4420" 00:20:30.961 }, 00:20:30.961 "peer_address": { 00:20:30.961 "trtype": "TCP", 00:20:30.961 "adrfam": "IPv4", 00:20:30.961 "traddr": "10.0.0.1", 00:20:30.961 "trsvcid": "49768" 00:20:30.961 }, 00:20:30.961 "auth": { 00:20:30.961 "state": "completed", 00:20:30.961 "digest": 
"sha384", 00:20:30.961 "dhgroup": "ffdhe2048" 00:20:30.961 } 00:20:30.961 } 00:20:30.961 ]' 00:20:30.961 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.219 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.219 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.219 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:31.219 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.219 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.219 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.219 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.478 20:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:20:32.414 20:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.414 20:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.414 20:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.414 20:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.414 20:08:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.414 20:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.414 20:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.414 20:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.414 20:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:32.673 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:32.673 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.673 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.673 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:32.673 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:32.673 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.673 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:20:32.673 20:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.673 20:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.673 20:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.673 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.673 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.930 00:20:32.930 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.930 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.930 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.189 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.189 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.189 20:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.189 20:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.189 20:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.189 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:33.189 { 00:20:33.189 "cntlid": 65, 00:20:33.189 "qid": 0, 00:20:33.189 "state": "enabled", 00:20:33.189 "listen_address": { 00:20:33.189 "trtype": "TCP", 00:20:33.189 "adrfam": "IPv4", 00:20:33.189 "traddr": "10.0.0.2", 00:20:33.189 "trsvcid": "4420" 00:20:33.189 }, 00:20:33.189 "peer_address": { 00:20:33.189 "trtype": "TCP", 00:20:33.189 "adrfam": "IPv4", 00:20:33.189 "traddr": "10.0.0.1", 00:20:33.189 "trsvcid": "49784" 00:20:33.189 }, 00:20:33.189 "auth": { 00:20:33.189 "state": "completed", 00:20:33.189 "digest": "sha384", 00:20:33.189 "dhgroup": "ffdhe3072" 00:20:33.189 } 00:20:33.189 } 00:20:33.189 ]' 00:20:33.189 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.447 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.447 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.447 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:33.447 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.447 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.447 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.447 20:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.706 
20:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:20:34.644 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.644 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.644 20:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.644 20:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.644 20:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.644 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.644 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.644 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:34.902 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:34.902 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.902 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.902 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:34.902 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:34.902 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.902 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.902 20:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.902 20:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.902 20:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.902 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:34.902 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.160 00:20:35.160 20:08:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:35.160 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.160 20:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.418 20:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.418 20:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.418 20:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.418 20:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.418 20:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.418 20:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.418 { 00:20:35.418 "cntlid": 67, 00:20:35.418 "qid": 0, 00:20:35.418 "state": "enabled", 00:20:35.418 "listen_address": { 00:20:35.418 "trtype": "TCP", 00:20:35.418 "adrfam": "IPv4", 00:20:35.418 "traddr": "10.0.0.2", 00:20:35.418 "trsvcid": "4420" 00:20:35.418 }, 00:20:35.418 "peer_address": { 00:20:35.418 "trtype": "TCP", 00:20:35.418 "adrfam": "IPv4", 00:20:35.418 "traddr": "10.0.0.1", 00:20:35.418 "trsvcid": "49812" 00:20:35.418 }, 00:20:35.418 "auth": { 00:20:35.418 "state": "completed", 00:20:35.418 "digest": "sha384", 00:20:35.418 "dhgroup": "ffdhe3072" 00:20:35.418 } 00:20:35.418 } 00:20:35.418 ]' 00:20:35.418 20:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.675 20:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.675 20:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.676 20:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:35.676 20:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.676 20:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.676 20:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.676 20:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.934 20:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:20:36.871 20:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.871 20:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.871 20:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.871 20:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.871 
20:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.871 20:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.871 20:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.871 20:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:37.128 20:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:37.128 20:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:37.128 20:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:37.128 20:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:37.128 20:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:37.128 20:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.128 20:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.128 20:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.128 20:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.128 20:08:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.128 20:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.128 20:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.696 00:20:37.696 20:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:37.696 20:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.696 20:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.696 20:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.696 20:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.696 20:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.696 20:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.954 20:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.954 20:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.954 { 00:20:37.954 "cntlid": 69, 00:20:37.954 "qid": 0, 00:20:37.954 "state": "enabled", 00:20:37.954 "listen_address": { 
00:20:37.954 "trtype": "TCP", 00:20:37.954 "adrfam": "IPv4", 00:20:37.954 "traddr": "10.0.0.2", 00:20:37.954 "trsvcid": "4420" 00:20:37.954 }, 00:20:37.954 "peer_address": { 00:20:37.954 "trtype": "TCP", 00:20:37.954 "adrfam": "IPv4", 00:20:37.954 "traddr": "10.0.0.1", 00:20:37.954 "trsvcid": "49840" 00:20:37.954 }, 00:20:37.954 "auth": { 00:20:37.954 "state": "completed", 00:20:37.954 "digest": "sha384", 00:20:37.954 "dhgroup": "ffdhe3072" 00:20:37.954 } 00:20:37.954 } 00:20:37.954 ]' 00:20:37.954 20:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.954 20:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.954 20:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.954 20:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.954 20:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.954 20:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.954 20:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.954 20:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.211 20:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:20:39.144 20:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.144 20:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:39.144 20:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.144 20:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.144 20:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.144 20:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:39.144 20:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:39.144 20:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:39.402 20:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:39.402 20:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:39.402 20:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:39.402 20:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:39.402 20:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:39.402 
20:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.402 20:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:39.402 20:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.402 20:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.402 20:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.402 20:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.402 20:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.968 00:20:39.968 20:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.968 20:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.968 20:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.968 20:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.968 20:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.968 20:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.968 20:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.968 20:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.968 20:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.968 { 00:20:39.968 "cntlid": 71, 00:20:39.968 "qid": 0, 00:20:39.968 "state": "enabled", 00:20:39.968 "listen_address": { 00:20:39.968 "trtype": "TCP", 00:20:39.968 "adrfam": "IPv4", 00:20:39.968 "traddr": "10.0.0.2", 00:20:39.968 "trsvcid": "4420" 00:20:39.968 }, 00:20:39.968 "peer_address": { 00:20:39.968 "trtype": "TCP", 00:20:39.968 "adrfam": "IPv4", 00:20:39.968 "traddr": "10.0.0.1", 00:20:39.968 "trsvcid": "49864" 00:20:39.968 }, 00:20:39.968 "auth": { 00:20:39.968 "state": "completed", 00:20:39.968 "digest": "sha384", 00:20:39.968 "dhgroup": "ffdhe3072" 00:20:39.968 } 00:20:39.968 } 00:20:39.968 ]' 00:20:39.968 20:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.225 20:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:40.225 20:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.225 20:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:40.225 20:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.225 20:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.225 20:08:27 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.225 20:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.482 20:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:20:41.417 20:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.417 20:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.417 20:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.417 20:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.417 20:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.417 20:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.417 20:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.417 20:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.417 20:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:41.675 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:41.675 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.675 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.675 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:41.675 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:41.675 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.675 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.675 20:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.675 20:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.675 20:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.675 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.675 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.242 00:20:42.242 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.242 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.242 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.499 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.499 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.499 20:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.499 20:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.499 20:08:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.499 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.499 { 00:20:42.499 "cntlid": 73, 00:20:42.499 "qid": 0, 00:20:42.499 "state": "enabled", 00:20:42.499 "listen_address": { 00:20:42.499 "trtype": "TCP", 00:20:42.499 "adrfam": "IPv4", 00:20:42.499 "traddr": "10.0.0.2", 00:20:42.499 "trsvcid": "4420" 00:20:42.499 }, 00:20:42.499 "peer_address": { 00:20:42.499 "trtype": "TCP", 00:20:42.499 "adrfam": "IPv4", 00:20:42.499 "traddr": "10.0.0.1", 00:20:42.499 "trsvcid": "46842" 00:20:42.499 }, 00:20:42.499 "auth": { 00:20:42.499 "state": "completed", 00:20:42.499 "digest": "sha384", 00:20:42.499 "dhgroup": "ffdhe4096" 00:20:42.499 } 00:20:42.499 } 00:20:42.499 ]' 00:20:42.499 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.499 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.499 20:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.499 20:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:42.499 20:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.499 20:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.499 20:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.499 20:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.757 20:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:20:43.692 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.692 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.692 20:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.692 20:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.692 20:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.692 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.692 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:43.692 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:43.950 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:43.950 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.950 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:43.950 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:43.950 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:43.950 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.950 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.950 20:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.950 20:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.950 20:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.950 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:43.950 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.517 00:20:44.517 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.517 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.517 20:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:44.517 20:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.517 20:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.517 20:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.517 20:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:44.776 20:08:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.776 20:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.776 { 00:20:44.776 "cntlid": 75, 00:20:44.776 "qid": 0, 00:20:44.776 "state": "enabled", 00:20:44.776 "listen_address": { 00:20:44.776 "trtype": "TCP", 00:20:44.776 "adrfam": "IPv4", 00:20:44.776 "traddr": "10.0.0.2", 00:20:44.776 "trsvcid": "4420" 00:20:44.776 }, 00:20:44.776 "peer_address": { 00:20:44.776 "trtype": "TCP", 00:20:44.776 "adrfam": "IPv4", 00:20:44.776 "traddr": "10.0.0.1", 00:20:44.776 "trsvcid": "46882" 00:20:44.776 }, 00:20:44.776 "auth": { 00:20:44.776 "state": "completed", 00:20:44.776 "digest": "sha384", 00:20:44.776 "dhgroup": "ffdhe4096" 00:20:44.776 } 00:20:44.776 } 00:20:44.776 ]' 00:20:44.776 20:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.776 20:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.776 20:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.776 20:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:44.776 20:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.776 20:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.776 20:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.776 20:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.035 20:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:20:46.045 20:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.045 20:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.045 20:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.045 20:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.045 20:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.045 20:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.046 20:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.046 20:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.303 20:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:46.303 20:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:20:46.303 20:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:46.303 20:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:46.303 20:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:46.303 20:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.303 20:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.303 20:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.303 20:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.303 20:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.303 20:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.303 20:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.871 00:20:46.871 20:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.871 20:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.871 20:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.871 20:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.871 20:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.871 20:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.871 20:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.871 20:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.871 20:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.871 { 00:20:46.871 "cntlid": 77, 00:20:46.871 "qid": 0, 00:20:46.871 "state": "enabled", 00:20:46.871 "listen_address": { 00:20:46.871 "trtype": "TCP", 00:20:46.871 "adrfam": "IPv4", 00:20:46.871 "traddr": "10.0.0.2", 00:20:46.871 "trsvcid": "4420" 00:20:46.871 }, 00:20:46.871 "peer_address": { 00:20:46.871 "trtype": "TCP", 00:20:46.871 "adrfam": "IPv4", 00:20:46.871 "traddr": "10.0.0.1", 00:20:46.871 "trsvcid": "46896" 00:20:46.871 }, 00:20:46.871 "auth": { 00:20:46.871 "state": "completed", 00:20:46.871 "digest": "sha384", 00:20:46.871 "dhgroup": "ffdhe4096" 00:20:46.871 } 00:20:46.871 } 00:20:46.871 ]' 00:20:46.871 20:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.129 20:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.129 20:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:20:47.129 20:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:47.129 20:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.129 20:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.129 20:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.129 20:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.388 20:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:20:48.325 20:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.325 20:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:48.325 20:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.325 20:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.325 20:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.325 20:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.325 20:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:48.325 20:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:48.584 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:48.584 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.584 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:48.584 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:48.584 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:48.584 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.584 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:48.584 20:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.584 20:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.584 20:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.584 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.584 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:49.152 00:20:49.152 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.152 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.152 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.152 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.152 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.152 20:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.152 20:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.152 20:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.152 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.152 { 00:20:49.152 "cntlid": 79, 00:20:49.152 "qid": 0, 00:20:49.152 "state": "enabled", 00:20:49.152 "listen_address": { 00:20:49.152 "trtype": "TCP", 00:20:49.152 "adrfam": "IPv4", 00:20:49.152 "traddr": "10.0.0.2", 00:20:49.152 "trsvcid": "4420" 00:20:49.152 }, 00:20:49.152 "peer_address": { 00:20:49.152 "trtype": "TCP", 00:20:49.152 "adrfam": "IPv4", 00:20:49.152 "traddr": "10.0.0.1", 00:20:49.152 "trsvcid": "46926" 00:20:49.152 }, 00:20:49.152 "auth": { 00:20:49.152 "state": "completed", 00:20:49.152 "digest": "sha384", 00:20:49.152 "dhgroup": "ffdhe4096" 00:20:49.152 } 00:20:49.152 } 00:20:49.152 ]' 00:20:49.152 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.411 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.411 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.411 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:49.411 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.411 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.411 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.411 20:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.670 20:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:20:50.608 20:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.608 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.608 20:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:50.608 20:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.608 20:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.608 20:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.608 20:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.608 20:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.608 20:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.608 20:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.866 20:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:50.866 20:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.866 20:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:50.866 20:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:50.866 20:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:50.866 20:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.866 20:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.866 20:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.866 20:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.866 20:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.866 20:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.866 20:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.434 00:20:51.434 20:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.434 20:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.434 20:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.691 20:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.691 20:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.691 20:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.691 20:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.691 20:08:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.691 20:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.691 { 00:20:51.691 "cntlid": 81, 00:20:51.691 "qid": 0, 00:20:51.691 "state": "enabled", 00:20:51.691 "listen_address": { 00:20:51.691 "trtype": "TCP", 00:20:51.691 "adrfam": "IPv4", 00:20:51.691 "traddr": "10.0.0.2", 00:20:51.691 "trsvcid": "4420" 00:20:51.691 }, 00:20:51.691 "peer_address": { 00:20:51.691 "trtype": "TCP", 00:20:51.691 "adrfam": "IPv4", 00:20:51.691 "traddr": "10.0.0.1", 00:20:51.691 "trsvcid": "53374" 00:20:51.691 }, 00:20:51.691 "auth": { 00:20:51.691 "state": "completed", 00:20:51.692 "digest": "sha384", 00:20:51.692 "dhgroup": "ffdhe6144" 00:20:51.692 } 00:20:51.692 } 00:20:51.692 ]' 00:20:51.692 20:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.692 20:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.692 20:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.692 20:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:51.692 20:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.949 20:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.949 20:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.949 20:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.207 20:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:20:53.143 20:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.143 20:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.143 20:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.143 20:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.143 20:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.143 20:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.143 20:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.143 20:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.402 20:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:53.402 20:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.402 20:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:53.402 20:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:53.402 20:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:53.402 20:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.402 20:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.402 20:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.402 20:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.402 20:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.402 20:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.402 20:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.970 00:20:53.970 20:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.970 20:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.970 20:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.228 20:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.228 20:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.228 20:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.228 20:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.228 20:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.228 20:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.228 { 00:20:54.228 "cntlid": 83, 00:20:54.228 "qid": 0, 00:20:54.228 "state": "enabled", 00:20:54.228 "listen_address": { 00:20:54.228 "trtype": "TCP", 00:20:54.228 "adrfam": "IPv4", 00:20:54.228 "traddr": "10.0.0.2", 00:20:54.228 "trsvcid": "4420" 00:20:54.228 }, 00:20:54.228 "peer_address": { 00:20:54.228 "trtype": "TCP", 00:20:54.228 "adrfam": "IPv4", 00:20:54.228 "traddr": "10.0.0.1", 00:20:54.228 "trsvcid": "53394" 00:20:54.228 }, 00:20:54.228 "auth": { 00:20:54.228 "state": "completed", 00:20:54.228 "digest": "sha384", 00:20:54.228 
"dhgroup": "ffdhe6144" 00:20:54.228 } 00:20:54.228 } 00:20:54.228 ]' 00:20:54.228 20:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.228 20:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.228 20:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.228 20:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:54.228 20:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.228 20:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.228 20:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.228 20:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.488 20:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.865 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.433 00:20:56.433 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.433 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.433 20:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.693 20:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.693 20:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.693 20:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.693 20:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.693 20:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.693 20:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.693 { 00:20:56.693 "cntlid": 85, 00:20:56.693 "qid": 0, 00:20:56.693 "state": "enabled", 00:20:56.693 "listen_address": { 00:20:56.693 "trtype": "TCP", 00:20:56.693 "adrfam": "IPv4", 00:20:56.693 "traddr": "10.0.0.2", 00:20:56.693 "trsvcid": "4420" 00:20:56.693 }, 00:20:56.693 "peer_address": { 00:20:56.693 "trtype": "TCP", 00:20:56.693 "adrfam": "IPv4", 00:20:56.693 "traddr": "10.0.0.1", 00:20:56.693 "trsvcid": "53418" 00:20:56.693 }, 00:20:56.693 "auth": { 00:20:56.693 "state": "completed", 00:20:56.693 "digest": "sha384", 00:20:56.693 "dhgroup": "ffdhe6144" 00:20:56.693 } 00:20:56.693 } 00:20:56.693 ]' 00:20:56.693 20:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.693 20:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.693 20:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.693 20:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:56.693 20:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.693 20:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.693 20:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.693 20:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.953 20:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 
-- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:20:57.889 20:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.149 20:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.408 20:08:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.408 20:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.408 20:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:58.978 00:20:58.978 20:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.978 20:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.978 20:08:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.978 20:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.978 20:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.978 20:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.978 20:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.978 20:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.978 20:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.978 { 00:20:58.978 "cntlid": 87, 00:20:58.978 "qid": 0, 00:20:58.978 "state": "enabled", 00:20:58.978 "listen_address": { 00:20:58.978 "trtype": "TCP", 00:20:58.978 "adrfam": "IPv4", 00:20:58.978 "traddr": "10.0.0.2", 00:20:58.978 "trsvcid": "4420" 00:20:58.978 }, 00:20:58.978 "peer_address": { 00:20:58.978 "trtype": "TCP", 00:20:58.978 "adrfam": "IPv4", 00:20:58.978 "traddr": "10.0.0.1", 00:20:58.978 "trsvcid": "53442" 00:20:58.978 }, 00:20:58.978 "auth": { 00:20:58.978 "state": "completed", 00:20:58.978 "digest": "sha384", 00:20:58.978 "dhgroup": "ffdhe6144" 00:20:58.978 } 00:20:58.978 } 00:20:58.978 ]' 00:20:58.978 20:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.236 20:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.236 20:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.236 20:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:59.236 20:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.236 20:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.236 20:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.236 20:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.494 20:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:21:00.432 20:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.432 20:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.432 20:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.432 20:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.432 20:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.432 20:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.432 20:08:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.432 20:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.432 20:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.690 20:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:00.690 20:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.690 20:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:00.690 20:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:00.690 20:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:00.690 20:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.690 20:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.690 20:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.690 20:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.690 20:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.690 20:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:00.690 20:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.711 00:21:01.711 20:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.711 20:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.711 20:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.711 20:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.711 20:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.711 20:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.711 20:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.711 20:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.711 20:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.711 { 00:21:01.711 "cntlid": 89, 00:21:01.711 "qid": 0, 00:21:01.711 "state": "enabled", 00:21:01.711 "listen_address": { 00:21:01.711 "trtype": "TCP", 00:21:01.711 "adrfam": "IPv4", 00:21:01.711 "traddr": "10.0.0.2", 00:21:01.711 
"trsvcid": "4420" 00:21:01.711 }, 00:21:01.711 "peer_address": { 00:21:01.711 "trtype": "TCP", 00:21:01.711 "adrfam": "IPv4", 00:21:01.711 "traddr": "10.0.0.1", 00:21:01.711 "trsvcid": "46198" 00:21:01.711 }, 00:21:01.711 "auth": { 00:21:01.711 "state": "completed", 00:21:01.711 "digest": "sha384", 00:21:01.711 "dhgroup": "ffdhe8192" 00:21:01.711 } 00:21:01.711 } 00:21:01.711 ]' 00:21:01.711 20:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.969 20:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.969 20:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.969 20:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:01.969 20:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.969 20:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.969 20:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.969 20:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.227 20:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:21:03.160 20:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.160 20:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.160 20:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.160 20:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.160 20:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.160 20:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:03.160 20:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.160 20:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.418 20:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:03.418 20:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.418 20:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:03.418 20:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:03.418 20:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:03.418 20:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.418 20:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.418 20:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.418 20:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.418 20:08:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.418 20:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.418 20:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.355 00:21:04.355 20:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:04.355 20:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:04.355 20:08:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.613 20:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.613 20:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.613 20:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.613 20:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.613 20:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.613 20:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:04.613 { 00:21:04.613 "cntlid": 91, 00:21:04.613 "qid": 0, 00:21:04.613 "state": "enabled", 00:21:04.613 "listen_address": { 00:21:04.613 "trtype": "TCP", 00:21:04.613 "adrfam": "IPv4", 00:21:04.613 "traddr": "10.0.0.2", 00:21:04.613 "trsvcid": "4420" 00:21:04.613 }, 00:21:04.613 "peer_address": { 00:21:04.613 "trtype": "TCP", 00:21:04.613 "adrfam": "IPv4", 00:21:04.613 "traddr": "10.0.0.1", 00:21:04.613 "trsvcid": "46230" 00:21:04.613 }, 00:21:04.613 "auth": { 00:21:04.613 "state": "completed", 00:21:04.613 "digest": "sha384", 00:21:04.613 "dhgroup": "ffdhe8192" 00:21:04.613 } 00:21:04.613 } 00:21:04.613 ]' 00:21:04.613 20:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:04.613 20:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.613 20:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:04.871 20:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.871 20:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:04.871 20:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.871 20:08:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.871 20:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.128 20:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:21:06.063 20:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.063 20:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.063 20:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.063 20:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.063 20:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.063 20:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.063 20:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.063 20:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.320 20:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:06.321 20:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:06.321 20:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:06.321 20:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:06.321 20:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:06.321 20:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.321 20:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.321 20:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.321 20:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.321 20:08:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.321 20:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.321 20:08:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.259 00:21:07.259 20:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:07.259 20:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:07.259 20:08:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.517 20:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.517 20:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.517 20:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.517 20:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.517 20:08:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.517 20:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:07.517 { 00:21:07.517 "cntlid": 93, 00:21:07.517 "qid": 0, 00:21:07.517 "state": "enabled", 00:21:07.517 "listen_address": { 00:21:07.517 "trtype": "TCP", 00:21:07.517 "adrfam": "IPv4", 00:21:07.517 "traddr": "10.0.0.2", 00:21:07.517 "trsvcid": "4420" 00:21:07.517 }, 00:21:07.517 "peer_address": { 00:21:07.517 "trtype": "TCP", 00:21:07.517 "adrfam": "IPv4", 00:21:07.517 "traddr": "10.0.0.1", 00:21:07.517 "trsvcid": "46274" 00:21:07.517 }, 00:21:07.517 "auth": { 00:21:07.517 "state": "completed", 00:21:07.517 "digest": "sha384", 00:21:07.517 "dhgroup": "ffdhe8192" 00:21:07.517 } 00:21:07.517 } 00:21:07.517 ]' 00:21:07.517 20:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:07.517 20:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.517 20:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:07.517 20:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:07.517 20:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:07.776 20:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.776 20:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.776 20:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.776 20:08:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:21:08.708 20:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.708 20:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.708 20:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.708 20:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:08.966 20:08:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:09.903 00:21:09.903 20:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:09.903 20:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:09.903 20:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.160 20:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.160 20:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.160 20:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.160 20:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.160 20:08:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.160 20:08:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.160 { 00:21:10.160 "cntlid": 95, 00:21:10.160 "qid": 0, 00:21:10.160 "state": "enabled", 00:21:10.160 "listen_address": { 00:21:10.160 "trtype": "TCP", 00:21:10.160 "adrfam": "IPv4", 00:21:10.160 "traddr": "10.0.0.2", 00:21:10.160 "trsvcid": "4420" 00:21:10.160 }, 00:21:10.160 "peer_address": { 00:21:10.160 "trtype": "TCP", 00:21:10.160 "adrfam": "IPv4", 00:21:10.160 "traddr": "10.0.0.1", 00:21:10.160 "trsvcid": "46298" 00:21:10.160 }, 00:21:10.160 "auth": { 00:21:10.160 "state": "completed", 00:21:10.160 "digest": "sha384", 00:21:10.160 "dhgroup": "ffdhe8192" 00:21:10.161 } 00:21:10.161 } 00:21:10.161 ]' 00:21:10.161 20:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.161 20:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.161 20:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.161 20:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:10.161 20:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.161 20:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.161 20:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.161 20:08:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.419 20:08:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:21:11.355 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.613 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.613 20:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.613 20:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.613 20:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.613 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:11.613 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.613 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.613 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.613 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:11.873 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:11.873 20:08:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:11.873 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.873 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:11.873 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:11.873 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.873 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.873 20:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.873 20:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.873 20:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.873 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.873 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.131 00:21:12.131 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.131 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.131 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.389 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.389 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.389 20:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.389 20:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.389 20:08:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.389 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.389 { 00:21:12.389 "cntlid": 97, 00:21:12.389 "qid": 0, 00:21:12.389 "state": "enabled", 00:21:12.389 "listen_address": { 00:21:12.389 "trtype": "TCP", 00:21:12.389 "adrfam": "IPv4", 00:21:12.389 "traddr": "10.0.0.2", 00:21:12.389 "trsvcid": "4420" 00:21:12.389 }, 00:21:12.389 "peer_address": { 00:21:12.389 "trtype": "TCP", 00:21:12.389 "adrfam": "IPv4", 00:21:12.389 "traddr": "10.0.0.1", 00:21:12.389 "trsvcid": "40478" 00:21:12.389 }, 00:21:12.389 "auth": { 00:21:12.389 "state": "completed", 00:21:12.389 "digest": "sha512", 00:21:12.389 "dhgroup": "null" 00:21:12.389 } 00:21:12.389 } 00:21:12.389 ]' 00:21:12.389 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.389 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.389 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:21:12.389 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:12.389 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.389 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.389 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.389 20:08:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.647 20:09:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:21:13.581 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.581 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.581 20:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.581 20:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.581 20:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.581 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:13.581 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:13.581 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:13.838 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:13.838 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:13.838 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.838 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:13.838 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:13.838 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.838 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.838 20:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.838 20:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.838 20:09:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.838 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.838 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.098 00:21:14.357 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.357 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:14.357 20:09:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.616 20:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.616 20:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.616 20:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.616 20:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.616 20:09:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.616 20:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:14.616 { 00:21:14.616 "cntlid": 99, 00:21:14.616 "qid": 0, 00:21:14.616 "state": "enabled", 00:21:14.616 "listen_address": { 00:21:14.616 "trtype": "TCP", 00:21:14.616 "adrfam": "IPv4", 00:21:14.616 "traddr": "10.0.0.2", 00:21:14.616 "trsvcid": "4420" 00:21:14.616 }, 00:21:14.616 "peer_address": { 00:21:14.616 "trtype": "TCP", 00:21:14.616 "adrfam": "IPv4", 00:21:14.616 "traddr": "10.0.0.1", 00:21:14.616 "trsvcid": "40500" 00:21:14.616 }, 00:21:14.616 "auth": { 00:21:14.616 "state": "completed", 00:21:14.616 "digest": "sha512", 00:21:14.616 "dhgroup": "null" 00:21:14.616 } 00:21:14.616 } 00:21:14.616 ]' 00:21:14.616 20:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:14.616 20:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:14.616 20:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:14.616 20:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:14.616 20:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:14.616 20:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.616 20:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.616 20:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.875 20:09:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 
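For reference, every iteration traced above is the same round trip: pin the host to one digest/dhgroup pair, register the host NQN on the subsystem with the key under test, attach a controller (which performs the DH-HCHAP handshake), check the negotiated parameters on the resulting qpair, then tear everything down. A minimal standalone sketch of that round trip, assembled from the commands visible in this trace: it assumes an SPDK nvmf target already serving nqn.2024-03.io.spdk:cnode0 on 10.0.0.2:4420 (RPC on the default /var/tmp/spdk.sock), a host bdev_nvme application listening on /var/tmp/host.sock, and key names key1/ckey1 registered earlier in the run (that setup is not shown here).

#!/usr/bin/env bash
set -e
rpc=scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Host side: allow exactly one digest and one DH group for this iteration.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Target side: register the host with its DH-HCHAP key and controller key.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attaching a controller through the host app triggers the authentication.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The qpair's auth block should report the negotiated digest and dhgroup.
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -e \
    '.[0].auth | .state == "completed" and .digest == "sha512" and .dhgroup == "ffdhe2048"'

# Tear down before the next (digest, dhgroup, key) combination.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

After this RPC-driven check, the trace repeats the same authentication through the kernel initiator (nvme connect / nvme disconnect with inline DHHC-1 secrets) before moving to the next combination.
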
00:21:15.811 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.811 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.811 20:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.811 20:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.811 20:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.811 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.811 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:15.811 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:16.068 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:16.068 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.068 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.068 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:16.068 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:16.068 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.068 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.068 20:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.068 20:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.068 20:09:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.068 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.068 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.327 00:21:16.327 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:16.327 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:16.327 20:09:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.617 20:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.617 20:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.617 20:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.617 20:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.617 20:09:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.617 20:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.617 { 00:21:16.617 "cntlid": 101, 00:21:16.617 "qid": 0, 00:21:16.617 "state": "enabled", 00:21:16.617 "listen_address": { 00:21:16.617 "trtype": "TCP", 00:21:16.617 "adrfam": "IPv4", 00:21:16.617 "traddr": "10.0.0.2", 00:21:16.617 "trsvcid": "4420" 00:21:16.617 }, 00:21:16.617 "peer_address": { 00:21:16.617 "trtype": "TCP", 00:21:16.617 "adrfam": "IPv4", 00:21:16.617 "traddr": "10.0.0.1", 00:21:16.617 "trsvcid": "40528" 00:21:16.617 }, 00:21:16.617 "auth": { 00:21:16.617 "state": "completed", 00:21:16.617 "digest": "sha512", 00:21:16.617 "dhgroup": "null" 00:21:16.617 } 00:21:16.617 } 00:21:16.617 ]' 00:21:16.617 20:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.617 20:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.617 20:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.877 20:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:16.877 20:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.877 20:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.877 20:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.877 20:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.137 20:09:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:21:18.075 20:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.075 20:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.075 20:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.075 20:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.075 20:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.075 20:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.075 20:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:18.075 20:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:21:18.333 20:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:18.333 20:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:18.333 20:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:18.333 20:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:18.333 20:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:18.333 20:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.333 20:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:18.333 20:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.333 20:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.333 20:09:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.333 20:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.333 20:09:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:18.590 00:21:18.590 20:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:18.590 20:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:18.590 20:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.848 20:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.848 20:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.848 20:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.848 20:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.848 20:09:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.848 20:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.848 { 00:21:18.848 "cntlid": 103, 00:21:18.848 "qid": 0, 00:21:18.848 "state": "enabled", 00:21:18.848 "listen_address": { 00:21:18.848 "trtype": "TCP", 00:21:18.848 "adrfam": "IPv4", 00:21:18.848 "traddr": "10.0.0.2", 00:21:18.848 "trsvcid": "4420" 00:21:18.848 }, 00:21:18.848 "peer_address": { 00:21:18.848 "trtype": "TCP", 00:21:18.848 "adrfam": "IPv4", 00:21:18.848 "traddr": "10.0.0.1", 00:21:18.848 "trsvcid": "40558" 00:21:18.848 }, 00:21:18.848 "auth": { 00:21:18.848 "state": "completed", 00:21:18.848 "digest": "sha512", 00:21:18.848 "dhgroup": "null" 00:21:18.848 } 00:21:18.848 } 00:21:18.848 ]' 00:21:18.848 20:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.848 20:09:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.848 20:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.848 20:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:18.848 20:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.107 20:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.107 20:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.107 20:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.365 20:09:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:21:20.297 20:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.297 20:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.297 20:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.297 20:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.297 20:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.297 20:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.297 20:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:20.297 20:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:20.297 20:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:20.555 20:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:20.555 20:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:20.555 20:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:20.555 20:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:20.555 20:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:20.555 20:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.555 20:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.555 20:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.555 20:09:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.555 20:09:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.555 20:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.555 20:09:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.813 00:21:20.813 20:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.813 20:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.813 20:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.069 20:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.069 20:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.069 20:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.069 20:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.069 20:09:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.069 20:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:21.069 { 00:21:21.069 "cntlid": 105, 00:21:21.069 "qid": 0, 00:21:21.069 "state": "enabled", 00:21:21.069 "listen_address": { 00:21:21.069 "trtype": "TCP", 00:21:21.069 "adrfam": "IPv4", 00:21:21.069 "traddr": "10.0.0.2", 00:21:21.069 "trsvcid": "4420" 00:21:21.069 }, 00:21:21.069 "peer_address": { 00:21:21.069 "trtype": "TCP", 00:21:21.069 "adrfam": "IPv4", 00:21:21.069 "traddr": "10.0.0.1", 00:21:21.069 "trsvcid": "40084" 00:21:21.069 }, 00:21:21.069 "auth": { 00:21:21.069 "state": "completed", 00:21:21.069 "digest": "sha512", 00:21:21.069 "dhgroup": "ffdhe2048" 00:21:21.069 } 00:21:21.069 } 00:21:21.069 ]' 00:21:21.069 20:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:21.069 20:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:21.069 20:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:21.070 20:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:21.070 20:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:21.070 20:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.070 20:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.070 20:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.327 20:09:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:21:22.259 20:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.259 20:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.259 20:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.259 20:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.259 20:09:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.259 20:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:22.259 20:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:22.259 20:09:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:22.518 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:22.518 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.518 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.518 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:22.518 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:22.518 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.518 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.518 20:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.518 20:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.777 20:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.777 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.777 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.036 00:21:23.036 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:23.036 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:23.036 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.294 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.294 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.294 20:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.294 20:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.294 20:09:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.294 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:23.294 { 00:21:23.294 "cntlid": 107, 00:21:23.294 "qid": 0, 00:21:23.294 "state": "enabled", 00:21:23.294 "listen_address": { 00:21:23.294 "trtype": "TCP", 00:21:23.294 "adrfam": "IPv4", 00:21:23.294 "traddr": "10.0.0.2", 00:21:23.294 "trsvcid": "4420" 00:21:23.294 }, 00:21:23.294 "peer_address": { 00:21:23.294 "trtype": "TCP", 00:21:23.294 "adrfam": "IPv4", 00:21:23.294 "traddr": "10.0.0.1", 00:21:23.294 "trsvcid": "40116" 00:21:23.294 }, 00:21:23.294 "auth": { 00:21:23.294 "state": "completed", 00:21:23.294 "digest": "sha512", 00:21:23.294 "dhgroup": "ffdhe2048" 00:21:23.294 } 00:21:23.294 } 00:21:23.295 ]' 00:21:23.295 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.295 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.295 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.295 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:23.295 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.295 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.295 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.295 20:09:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.552 20:09:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:21:24.486 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.486 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.486 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.486 20:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.486 20:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.486 20:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.486 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:24.486 20:09:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:24.486 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:24.744 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:24.744 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.744 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.744 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:24.744 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:24.744 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.744 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.744 20:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.744 20:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.744 20:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.744 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.744 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.002 00:21:25.002 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:25.002 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:25.002 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.260 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.260 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.260 20:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.260 20:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.260 20:09:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.260 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:25.260 { 00:21:25.260 "cntlid": 109, 00:21:25.260 "qid": 0, 00:21:25.260 "state": "enabled", 00:21:25.260 "listen_address": { 00:21:25.260 "trtype": "TCP", 00:21:25.260 "adrfam": "IPv4", 00:21:25.260 "traddr": "10.0.0.2", 00:21:25.260 "trsvcid": "4420" 00:21:25.260 }, 00:21:25.260 "peer_address": { 00:21:25.260 "trtype": "TCP", 00:21:25.260 
"adrfam": "IPv4", 00:21:25.260 "traddr": "10.0.0.1", 00:21:25.260 "trsvcid": "40144" 00:21:25.260 }, 00:21:25.260 "auth": { 00:21:25.260 "state": "completed", 00:21:25.260 "digest": "sha512", 00:21:25.260 "dhgroup": "ffdhe2048" 00:21:25.260 } 00:21:25.260 } 00:21:25.260 ]' 00:21:25.260 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:25.260 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.260 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:25.518 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:25.518 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:25.518 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.518 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.518 20:09:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.777 20:09:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:21:26.719 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.719 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.719 20:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.719 20:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.719 20:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.719 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.719 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:26.719 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:26.997 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:26.997 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.997 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.997 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:26.997 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:26.997 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.997 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:26.997 20:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.997 20:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.997 20:09:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.997 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:26.997 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:27.255 00:21:27.255 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:27.255 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:27.255 20:09:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.513 20:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.513 20:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.513 20:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.513 20:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.513 20:09:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.513 20:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.513 { 00:21:27.513 "cntlid": 111, 00:21:27.513 "qid": 0, 00:21:27.513 "state": "enabled", 00:21:27.513 "listen_address": { 00:21:27.513 "trtype": "TCP", 00:21:27.513 "adrfam": "IPv4", 00:21:27.513 "traddr": "10.0.0.2", 00:21:27.513 "trsvcid": "4420" 00:21:27.513 }, 00:21:27.513 "peer_address": { 00:21:27.513 "trtype": "TCP", 00:21:27.513 "adrfam": "IPv4", 00:21:27.513 "traddr": "10.0.0.1", 00:21:27.513 "trsvcid": "40180" 00:21:27.513 }, 00:21:27.513 "auth": { 00:21:27.514 "state": "completed", 00:21:27.514 "digest": "sha512", 00:21:27.514 "dhgroup": "ffdhe2048" 00:21:27.514 } 00:21:27.514 } 00:21:27.514 ]' 00:21:27.514 20:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.514 20:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.514 20:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.514 20:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:27.514 20:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.514 20:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.514 20:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.514 20:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.773 20:09:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:21:28.709 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.709 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.709 20:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.709 20:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.709 20:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.709 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.709 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.709 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.709 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.967 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:28.967 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.967 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.967 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:28.967 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:28.967 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.967 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.967 20:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.967 20:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.967 20:09:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.967 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.967 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
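[Editor's note] The trace above has just issued the authenticated attach for the ffdhe3072 group with key0; the qpair verification that follows below completes the pattern. Condensed into one iteration, the RPC sequence this test repeats looks roughly like the sketch here. $RPC, $HOST_SOCK, $HOSTNQN and $KEYID are placeholders standing in for the rpc.py path, the /var/tmp/host.sock socket, the host NQN/UUID and the loop key id seen in the log; the commands and flags themselves are the ones the trace prints.

# Hedged sketch of one connect_authenticate iteration, reconstructed from the
# trace; host-side calls go through -s $HOST_SOCK (hostrpc), target-side calls
# use the default socket (rpc_cmd).
$RPC -s $HOST_SOCK bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072        # restrict host to one digest/dhgroup
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"   # register host + key pair on the target
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"   # authenticated attach
$RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | .digest, .dhgroup, .state'            # expect sha512 / ffdhe3072 / completed
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0           # tear down before the nvme-cli leg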
00:21:29.536 00:21:29.536 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.536 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.536 20:09:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.795 20:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.795 20:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.795 20:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.795 20:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.795 20:09:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.795 20:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.795 { 00:21:29.795 "cntlid": 113, 00:21:29.795 "qid": 0, 00:21:29.795 "state": "enabled", 00:21:29.795 "listen_address": { 00:21:29.795 "trtype": "TCP", 00:21:29.795 "adrfam": "IPv4", 00:21:29.795 "traddr": "10.0.0.2", 00:21:29.795 "trsvcid": "4420" 00:21:29.795 }, 00:21:29.795 "peer_address": { 00:21:29.795 "trtype": "TCP", 00:21:29.795 "adrfam": "IPv4", 00:21:29.795 "traddr": "10.0.0.1", 00:21:29.795 "trsvcid": "40216" 00:21:29.795 }, 00:21:29.795 "auth": { 00:21:29.795 "state": "completed", 00:21:29.795 "digest": "sha512", 00:21:29.795 "dhgroup": "ffdhe3072" 00:21:29.795 } 00:21:29.795 } 00:21:29.795 ]' 00:21:29.795 20:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.795 20:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.795 20:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.795 20:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:29.795 20:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.795 20:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.795 20:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.795 20:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.054 20:09:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:21:30.990 20:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.990 20:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.990 20:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
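[Editor's note] After the SPDK-host-side attach is verified, the same key material is exercised through the kernel initiator: the trace above shows nvme-cli connecting with the secrets in DHHC-1 interchange format as printed in the log, disconnecting, and then the nvmf_subsystem_remove_host call (whose trace continues below) de-registering the host before the next key id. A minimal sketch of that leg, with <key>/<ctrl-key> as placeholders for the base64 blobs shown above:

# Hedged sketch of the kernel-initiator (nvme-cli) leg of each iteration.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:00:<key>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<ctrl-key>:'   # bidirectional DH-HMAC-CHAP
nvme disconnect -n nqn.2024-03.io.spdk:cnode0      # log reports "disconnected 1 controller(s)"
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"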
00:21:30.990 20:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.990 20:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.990 20:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.990 20:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:30.990 20:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:31.250 20:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:31.250 20:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.250 20:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.250 20:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:31.250 20:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:31.250 20:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.250 20:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.250 20:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.250 20:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.250 20:09:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.250 20:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.250 20:09:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.848 00:21:31.848 20:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.848 20:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.848 20:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.848 20:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.848 20:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.848 20:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.848 20:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.848 20:09:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.848 20:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.848 { 00:21:31.848 
"cntlid": 115, 00:21:31.848 "qid": 0, 00:21:31.848 "state": "enabled", 00:21:31.848 "listen_address": { 00:21:31.848 "trtype": "TCP", 00:21:31.848 "adrfam": "IPv4", 00:21:31.848 "traddr": "10.0.0.2", 00:21:31.848 "trsvcid": "4420" 00:21:31.848 }, 00:21:31.848 "peer_address": { 00:21:31.848 "trtype": "TCP", 00:21:31.849 "adrfam": "IPv4", 00:21:31.849 "traddr": "10.0.0.1", 00:21:31.849 "trsvcid": "38348" 00:21:31.849 }, 00:21:31.849 "auth": { 00:21:31.849 "state": "completed", 00:21:31.849 "digest": "sha512", 00:21:31.849 "dhgroup": "ffdhe3072" 00:21:31.849 } 00:21:31.849 } 00:21:31.849 ]' 00:21:31.849 20:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.107 20:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.107 20:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.107 20:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:32.107 20:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.107 20:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.107 20:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.107 20:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.365 20:09:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:21:33.298 20:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.298 20:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.298 20:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.298 20:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.298 20:09:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.298 20:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.298 20:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:33.298 20:09:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:33.556 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:33.556 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.556 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.556 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:21:33.556 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:33.556 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.556 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.556 20:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.556 20:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.556 20:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.556 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.556 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.814 00:21:33.814 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.814 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.814 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.071 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.071 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.071 20:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.071 20:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.071 20:09:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.071 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.071 { 00:21:34.071 "cntlid": 117, 00:21:34.071 "qid": 0, 00:21:34.071 "state": "enabled", 00:21:34.071 "listen_address": { 00:21:34.071 "trtype": "TCP", 00:21:34.071 "adrfam": "IPv4", 00:21:34.071 "traddr": "10.0.0.2", 00:21:34.071 "trsvcid": "4420" 00:21:34.071 }, 00:21:34.071 "peer_address": { 00:21:34.071 "trtype": "TCP", 00:21:34.071 "adrfam": "IPv4", 00:21:34.071 "traddr": "10.0.0.1", 00:21:34.071 "trsvcid": "38378" 00:21:34.071 }, 00:21:34.071 "auth": { 00:21:34.071 "state": "completed", 00:21:34.071 "digest": "sha512", 00:21:34.071 "dhgroup": "ffdhe3072" 00:21:34.071 } 00:21:34.071 } 00:21:34.071 ]' 00:21:34.071 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.071 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.071 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.329 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:34.329 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:21:34.329 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.329 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.329 20:09:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.586 20:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:21:35.519 20:09:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.519 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.519 20:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.519 20:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.519 20:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.519 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.519 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.519 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.776 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:35.776 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.776 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.776 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:35.776 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:35.776 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.776 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:35.776 20:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.776 20:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.776 20:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.776 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:35.776 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.343 00:21:36.343 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:36.343 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:36.343 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.343 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.343 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.343 20:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.343 20:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.343 20:09:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.343 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:36.343 { 00:21:36.343 "cntlid": 119, 00:21:36.343 "qid": 0, 00:21:36.343 "state": "enabled", 00:21:36.343 "listen_address": { 00:21:36.343 "trtype": "TCP", 00:21:36.343 "adrfam": "IPv4", 00:21:36.343 "traddr": "10.0.0.2", 00:21:36.343 "trsvcid": "4420" 00:21:36.343 }, 00:21:36.343 "peer_address": { 00:21:36.343 "trtype": "TCP", 00:21:36.343 "adrfam": "IPv4", 00:21:36.343 "traddr": "10.0.0.1", 00:21:36.343 "trsvcid": "38400" 00:21:36.343 }, 00:21:36.343 "auth": { 00:21:36.343 "state": "completed", 00:21:36.343 "digest": "sha512", 00:21:36.343 "dhgroup": "ffdhe3072" 00:21:36.343 } 00:21:36.343 } 00:21:36.343 ]' 00:21:36.343 20:09:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.602 20:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.602 20:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.602 20:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:36.602 20:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.602 20:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.602 20:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.602 20:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.860 20:09:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:21:37.796 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.796 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.796 20:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.796 20:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.796 20:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.796 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.796 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.796 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.796 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:38.054 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:38.054 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:38.054 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:38.054 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:38.054 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:38.054 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.054 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.054 20:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.054 20:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.054 20:09:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.054 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.054 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.312 00:21:38.312 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:38.312 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:38.312 20:09:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.570 20:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.571 20:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.571 20:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.571 20:09:26 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.571 20:09:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.571 20:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.571 { 00:21:38.571 "cntlid": 121, 00:21:38.571 "qid": 0, 00:21:38.571 "state": "enabled", 00:21:38.571 "listen_address": { 00:21:38.571 "trtype": "TCP", 00:21:38.571 "adrfam": "IPv4", 00:21:38.571 "traddr": "10.0.0.2", 00:21:38.571 "trsvcid": "4420" 00:21:38.571 }, 00:21:38.571 "peer_address": { 00:21:38.571 "trtype": "TCP", 00:21:38.571 "adrfam": "IPv4", 00:21:38.571 "traddr": "10.0.0.1", 00:21:38.571 "trsvcid": "38434" 00:21:38.571 }, 00:21:38.571 "auth": { 00:21:38.571 "state": "completed", 00:21:38.571 "digest": "sha512", 00:21:38.571 "dhgroup": "ffdhe4096" 00:21:38.571 } 00:21:38.571 } 00:21:38.571 ]' 00:21:38.571 20:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.571 20:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.571 20:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.829 20:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:38.829 20:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.829 20:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.829 20:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.829 20:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.086 20:09:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:21:40.024 20:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.024 20:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.024 20:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.024 20:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.024 20:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.024 20:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:40.024 20:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.024 20:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.283 20:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe4096 1 00:21:40.283 20:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:40.283 20:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:40.283 20:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:40.283 20:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:40.283 20:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.283 20:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.283 20:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.283 20:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.283 20:09:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.283 20:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.283 20:09:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.849 00:21:40.849 20:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.849 20:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.850 20:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.850 20:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.850 20:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.850 20:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.850 20:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.850 20:09:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.850 20:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.850 { 00:21:40.850 "cntlid": 123, 00:21:40.850 "qid": 0, 00:21:40.850 "state": "enabled", 00:21:40.850 "listen_address": { 00:21:40.850 "trtype": "TCP", 00:21:40.850 "adrfam": "IPv4", 00:21:40.850 "traddr": "10.0.0.2", 00:21:40.850 "trsvcid": "4420" 00:21:40.850 }, 00:21:40.850 "peer_address": { 00:21:40.850 "trtype": "TCP", 00:21:40.850 "adrfam": "IPv4", 00:21:40.850 "traddr": "10.0.0.1", 00:21:40.850 "trsvcid": "51376" 00:21:40.850 }, 00:21:40.850 "auth": { 00:21:40.850 "state": "completed", 00:21:40.850 "digest": "sha512", 00:21:40.850 "dhgroup": "ffdhe4096" 00:21:40.850 } 00:21:40.850 } 00:21:40.850 ]' 00:21:40.850 20:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:41.108 20:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.108 20:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:41.108 20:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:41.108 20:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:41.108 20:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.108 20:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.108 20:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.366 20:09:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:21:42.300 20:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.300 20:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.300 20:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.300 20:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.300 20:09:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.300 20:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:42.300 20:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.300 20:09:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.558 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:42.558 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.558 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.558 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:42.558 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:42.558 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.558 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.558 20:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.558 20:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.558 20:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.558 
20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.558 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.125 00:21:43.125 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:43.125 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:43.125 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.125 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.125 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.125 20:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.125 20:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.125 20:09:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.125 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:43.125 { 00:21:43.126 "cntlid": 125, 00:21:43.126 "qid": 0, 00:21:43.126 "state": "enabled", 00:21:43.126 "listen_address": { 00:21:43.126 "trtype": "TCP", 00:21:43.126 "adrfam": "IPv4", 00:21:43.126 "traddr": "10.0.0.2", 00:21:43.126 "trsvcid": "4420" 00:21:43.126 }, 00:21:43.126 "peer_address": { 00:21:43.126 "trtype": "TCP", 00:21:43.126 "adrfam": "IPv4", 00:21:43.126 "traddr": "10.0.0.1", 00:21:43.126 "trsvcid": "51408" 00:21:43.126 }, 00:21:43.126 "auth": { 00:21:43.126 "state": "completed", 00:21:43.126 "digest": "sha512", 00:21:43.126 "dhgroup": "ffdhe4096" 00:21:43.126 } 00:21:43.126 } 00:21:43.126 ]' 00:21:43.126 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:43.384 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.384 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:43.384 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:43.384 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.384 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.384 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.384 20:09:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.641 20:09:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:21:44.579 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.579 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.579 20:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.579 20:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.579 20:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.579 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.579 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:44.579 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:44.837 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:44.837 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.837 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.837 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:44.837 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:44.837 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.837 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:44.837 20:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.837 20:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.837 20:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.837 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.837 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.095 00:21:45.095 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.095 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.095 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.352 20:09:32 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.352 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.352 20:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.352 20:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.352 20:09:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.352 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.352 { 00:21:45.352 "cntlid": 127, 00:21:45.352 "qid": 0, 00:21:45.352 "state": "enabled", 00:21:45.352 "listen_address": { 00:21:45.352 "trtype": "TCP", 00:21:45.352 "adrfam": "IPv4", 00:21:45.352 "traddr": "10.0.0.2", 00:21:45.352 "trsvcid": "4420" 00:21:45.352 }, 00:21:45.352 "peer_address": { 00:21:45.352 "trtype": "TCP", 00:21:45.352 "adrfam": "IPv4", 00:21:45.352 "traddr": "10.0.0.1", 00:21:45.352 "trsvcid": "51452" 00:21:45.352 }, 00:21:45.352 "auth": { 00:21:45.352 "state": "completed", 00:21:45.352 "digest": "sha512", 00:21:45.352 "dhgroup": "ffdhe4096" 00:21:45.352 } 00:21:45.352 } 00:21:45.352 ]' 00:21:45.352 20:09:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.610 20:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.610 20:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.610 20:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:45.610 20:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.610 20:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.610 20:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.610 20:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.903 20:09:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:21:46.849 20:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.849 20:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.849 20:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.849 20:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.849 20:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.849 20:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:46.849 20:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.849 20:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
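[Editor's note] The @92-@96 trace markers make the driver structure visible: an outer loop over DH groups (just advanced here to ffdhe6144) and an inner loop over the key ids, each pass resetting the host's allowed digest/dhgroup before re-authenticating. A rough reconstruction of those loops, inferred only from the markers and values in this log (the array contents beyond what appears here are an assumption):

# Rough reconstruction of the loops implied by the target/auth.sh@92-@96 markers.
for dhgroup in "${dhgroups[@]}"; do                    # @92: ffdhe2048, 3072, 4096, 6144, ...
    for keyid in "${!keys[@]}"; do                     # @93: key ids 0-3
        hostrpc bdev_nvme_set_options \
            --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"   # @94
        connect_authenticate sha512 "$dhgroup" "$keyid"            # @96
    done
done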
00:21:46.849 20:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:47.107 20:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:47.107 20:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.107 20:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.107 20:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:47.107 20:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:47.107 20:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.107 20:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.107 20:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.107 20:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.107 20:09:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.107 20:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.107 20:09:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.687 00:21:47.687 20:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.687 20:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.687 20:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.946 20:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.946 20:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.946 20:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.946 20:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.946 20:09:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.946 20:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.946 { 00:21:47.946 "cntlid": 129, 00:21:47.946 "qid": 0, 00:21:47.946 "state": "enabled", 00:21:47.946 "listen_address": { 00:21:47.946 "trtype": "TCP", 00:21:47.946 "adrfam": "IPv4", 00:21:47.946 "traddr": "10.0.0.2", 00:21:47.946 "trsvcid": "4420" 00:21:47.946 }, 00:21:47.946 "peer_address": { 00:21:47.946 "trtype": "TCP", 00:21:47.946 "adrfam": "IPv4", 00:21:47.946 "traddr": "10.0.0.1", 00:21:47.946 "trsvcid": "51480" 00:21:47.946 }, 00:21:47.946 "auth": { 
00:21:47.946 "state": "completed", 00:21:47.946 "digest": "sha512", 00:21:47.946 "dhgroup": "ffdhe6144" 00:21:47.946 } 00:21:47.946 } 00:21:47.946 ]' 00:21:47.946 20:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.946 20:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.946 20:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.946 20:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:47.946 20:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.205 20:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.205 20:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.205 20:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.464 20:09:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:21:49.401 20:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.401 20:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.401 20:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.401 20:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.401 20:09:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.401 20:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:49.401 20:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.401 20:09:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:49.659 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:49.659 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:49.659 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:49.659 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:49.659 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:49.659 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.659 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.659 20:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.659 20:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.659 20:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.659 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.659 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.228 00:21:50.228 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:50.228 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:50.228 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.486 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.486 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.486 20:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.486 20:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.486 20:09:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.486 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:50.486 { 00:21:50.486 "cntlid": 131, 00:21:50.486 "qid": 0, 00:21:50.486 "state": "enabled", 00:21:50.486 "listen_address": { 00:21:50.486 "trtype": "TCP", 00:21:50.486 "adrfam": "IPv4", 00:21:50.486 "traddr": "10.0.0.2", 00:21:50.486 "trsvcid": "4420" 00:21:50.486 }, 00:21:50.486 "peer_address": { 00:21:50.487 "trtype": "TCP", 00:21:50.487 "adrfam": "IPv4", 00:21:50.487 "traddr": "10.0.0.1", 00:21:50.487 "trsvcid": "51502" 00:21:50.487 }, 00:21:50.487 "auth": { 00:21:50.487 "state": "completed", 00:21:50.487 "digest": "sha512", 00:21:50.487 "dhgroup": "ffdhe6144" 00:21:50.487 } 00:21:50.487 } 00:21:50.487 ]' 00:21:50.487 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:50.487 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.487 20:09:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:50.487 20:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:50.487 20:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:50.487 20:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.487 20:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.487 20:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.745 20:09:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:21:51.682 20:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.682 20:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.682 20:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.682 20:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.682 20:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.682 20:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.682 20:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:51.682 20:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:51.941 20:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:51.941 20:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.941 20:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.941 20:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:51.941 20:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:51.941 20:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.941 20:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.941 20:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.941 20:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.941 20:09:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.941 20:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.941 20:09:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:21:52.507 00:21:52.507 20:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:52.507 20:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:52.507 20:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.765 20:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.765 20:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.765 20:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.765 20:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.765 20:09:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.765 20:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.765 { 00:21:52.765 "cntlid": 133, 00:21:52.765 "qid": 0, 00:21:52.765 "state": "enabled", 00:21:52.765 "listen_address": { 00:21:52.765 "trtype": "TCP", 00:21:52.765 "adrfam": "IPv4", 00:21:52.765 "traddr": "10.0.0.2", 00:21:52.765 "trsvcid": "4420" 00:21:52.765 }, 00:21:52.765 "peer_address": { 00:21:52.765 "trtype": "TCP", 00:21:52.765 "adrfam": "IPv4", 00:21:52.765 "traddr": "10.0.0.1", 00:21:52.765 "trsvcid": "41020" 00:21:52.765 }, 00:21:52.765 "auth": { 00:21:52.765 "state": "completed", 00:21:52.765 "digest": "sha512", 00:21:52.766 "dhgroup": "ffdhe6144" 00:21:52.766 } 00:21:52.766 } 00:21:52.766 ]' 00:21:52.766 20:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:53.024 20:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.024 20:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:53.024 20:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:53.024 20:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:53.024 20:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.024 20:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.024 20:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.282 20:09:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:21:54.221 20:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.221 20:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.221 20:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.221 20:09:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.221 20:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.221 20:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:54.221 20:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:54.221 20:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:54.480 20:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:54.480 20:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:54.480 20:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:54.480 20:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:54.480 20:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:54.480 20:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.480 20:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:54.480 20:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.480 20:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.480 20:09:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.480 20:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:54.480 20:09:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:55.046 00:21:55.046 20:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:55.046 20:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:55.046 20:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.310 20:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.310 20:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.310 20:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.310 20:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.310 20:09:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.310 20:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:55.310 { 00:21:55.310 "cntlid": 135, 00:21:55.310 "qid": 0, 00:21:55.310 "state": "enabled", 00:21:55.310 "listen_address": { 
00:21:55.310 "trtype": "TCP", 00:21:55.310 "adrfam": "IPv4", 00:21:55.310 "traddr": "10.0.0.2", 00:21:55.310 "trsvcid": "4420" 00:21:55.310 }, 00:21:55.310 "peer_address": { 00:21:55.310 "trtype": "TCP", 00:21:55.310 "adrfam": "IPv4", 00:21:55.310 "traddr": "10.0.0.1", 00:21:55.310 "trsvcid": "41038" 00:21:55.310 }, 00:21:55.310 "auth": { 00:21:55.310 "state": "completed", 00:21:55.310 "digest": "sha512", 00:21:55.310 "dhgroup": "ffdhe6144" 00:21:55.310 } 00:21:55.310 } 00:21:55.310 ]' 00:21:55.310 20:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:55.311 20:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.311 20:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:55.311 20:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:55.311 20:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:55.311 20:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.311 20:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.311 20:09:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.883 20:09:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:21:56.821 20:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.821 20:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.821 20:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.821 20:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.821 20:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.821 20:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.821 20:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:56.821 20:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:56.821 20:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.078 20:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:57.078 20:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:57.078 20:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:57.078 20:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:57.078 20:09:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:21:57.078 20:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.078 20:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.078 20:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.078 20:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.078 20:09:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.078 20:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.078 20:09:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:58.015 00:21:58.015 20:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:58.015 20:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:58.015 20:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.015 20:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.015 20:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.015 20:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.015 20:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.015 20:09:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.015 20:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:58.015 { 00:21:58.015 "cntlid": 137, 00:21:58.015 "qid": 0, 00:21:58.015 "state": "enabled", 00:21:58.015 "listen_address": { 00:21:58.015 "trtype": "TCP", 00:21:58.015 "adrfam": "IPv4", 00:21:58.015 "traddr": "10.0.0.2", 00:21:58.015 "trsvcid": "4420" 00:21:58.015 }, 00:21:58.015 "peer_address": { 00:21:58.015 "trtype": "TCP", 00:21:58.015 "adrfam": "IPv4", 00:21:58.015 "traddr": "10.0.0.1", 00:21:58.015 "trsvcid": "41048" 00:21:58.015 }, 00:21:58.015 "auth": { 00:21:58.015 "state": "completed", 00:21:58.015 "digest": "sha512", 00:21:58.015 "dhgroup": "ffdhe8192" 00:21:58.015 } 00:21:58.015 } 00:21:58.015 ]' 00:21:58.015 20:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:58.015 20:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.274 20:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:58.274 20:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.274 20:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:58.274 20:09:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.274 20:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.274 20:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.547 20:09:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:21:59.484 20:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.484 20:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.484 20:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.484 20:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.484 20:09:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.484 20:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:59.484 20:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.484 20:09:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:59.742 20:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:59.742 20:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:59.742 20:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:59.742 20:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:59.742 20:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:59.742 20:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.742 20:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.742 20:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.742 20:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.742 20:09:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.742 20:09:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.742 20:09:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.679 00:22:00.679 20:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:00.679 20:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:00.679 20:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.938 20:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.938 20:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.938 20:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.938 20:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.938 20:09:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.938 20:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:00.938 { 00:22:00.938 "cntlid": 139, 00:22:00.938 "qid": 0, 00:22:00.938 "state": "enabled", 00:22:00.938 "listen_address": { 00:22:00.938 "trtype": "TCP", 00:22:00.938 "adrfam": "IPv4", 00:22:00.938 "traddr": "10.0.0.2", 00:22:00.938 "trsvcid": "4420" 00:22:00.938 }, 00:22:00.938 "peer_address": { 00:22:00.938 "trtype": "TCP", 00:22:00.938 "adrfam": "IPv4", 00:22:00.938 "traddr": "10.0.0.1", 00:22:00.938 "trsvcid": "41082" 00:22:00.938 }, 00:22:00.938 "auth": { 00:22:00.938 "state": "completed", 00:22:00.938 "digest": "sha512", 00:22:00.938 "dhgroup": "ffdhe8192" 00:22:00.938 } 00:22:00.938 } 00:22:00.938 ]' 00:22:00.938 20:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:00.938 20:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.938 20:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:00.938 20:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.938 20:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:00.938 20:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.938 20:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.938 20:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.197 20:09:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTM2MDZmNTkyNTUzMGQ1ZTk1YTJkYmU5YTE3ZDc0M2QkLqrX: --dhchap-ctrl-secret DHHC-1:02:YjFhZDg5ODg5ZGJlMWY0ZGM5MDk3YTYzM2FjYzM3NzJiZTQwYjg1ZTRiMDJjOWI5IAnCAg==: 00:22:02.166 20:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:02.166 20:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.166 20:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.166 20:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.166 20:09:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.166 20:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:02.166 20:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:02.166 20:09:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:02.424 20:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:02.424 20:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:02.424 20:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:02.424 20:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:02.424 20:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:02.424 20:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.424 20:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.424 20:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.424 20:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.424 20:09:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.424 20:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.424 20:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.361 00:22:03.361 20:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:03.361 20:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:03.361 20:09:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.618 20:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.618 20:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.618 20:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:22:03.618 20:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.618 20:09:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.618 20:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:03.618 { 00:22:03.618 "cntlid": 141, 00:22:03.618 "qid": 0, 00:22:03.618 "state": "enabled", 00:22:03.618 "listen_address": { 00:22:03.618 "trtype": "TCP", 00:22:03.618 "adrfam": "IPv4", 00:22:03.618 "traddr": "10.0.0.2", 00:22:03.618 "trsvcid": "4420" 00:22:03.618 }, 00:22:03.618 "peer_address": { 00:22:03.618 "trtype": "TCP", 00:22:03.618 "adrfam": "IPv4", 00:22:03.618 "traddr": "10.0.0.1", 00:22:03.618 "trsvcid": "33418" 00:22:03.618 }, 00:22:03.618 "auth": { 00:22:03.618 "state": "completed", 00:22:03.618 "digest": "sha512", 00:22:03.618 "dhgroup": "ffdhe8192" 00:22:03.619 } 00:22:03.619 } 00:22:03.619 ]' 00:22:03.619 20:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:03.619 20:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:03.619 20:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:03.876 20:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:03.876 20:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:03.876 20:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.876 20:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.876 20:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.134 20:09:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NzNiNzI4M2NmZTNiNmMxMGVhYjkzYzdlZDVkMDgyNTI1MWU2ZmZmZDk0MzEyMzc0YIk+RA==: --dhchap-ctrl-secret DHHC-1:01:ZDE0YzBiNzQ5YTk0OWNkMDNkYzNhYzFiYjQ3YzBiMGG4HXA3: 00:22:05.070 20:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.070 20:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.070 20:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.070 20:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.070 20:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.070 20:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:05.070 20:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:05.070 20:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:05.328 20:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe8192 3 00:22:05.328 20:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:05.328 20:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:05.328 20:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:05.328 20:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:05.328 20:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.328 20:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:05.328 20:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.328 20:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.328 20:09:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.328 20:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:05.328 20:09:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:06.262 00:22:06.262 20:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:06.262 20:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:06.262 20:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.262 20:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.520 20:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.520 20:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.520 20:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.520 20:09:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.520 20:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:06.520 { 00:22:06.520 "cntlid": 143, 00:22:06.520 "qid": 0, 00:22:06.520 "state": "enabled", 00:22:06.520 "listen_address": { 00:22:06.520 "trtype": "TCP", 00:22:06.520 "adrfam": "IPv4", 00:22:06.520 "traddr": "10.0.0.2", 00:22:06.520 "trsvcid": "4420" 00:22:06.520 }, 00:22:06.520 "peer_address": { 00:22:06.520 "trtype": "TCP", 00:22:06.520 "adrfam": "IPv4", 00:22:06.520 "traddr": "10.0.0.1", 00:22:06.520 "trsvcid": "33454" 00:22:06.520 }, 00:22:06.520 "auth": { 00:22:06.520 "state": "completed", 00:22:06.520 "digest": "sha512", 00:22:06.520 "dhgroup": "ffdhe8192" 00:22:06.520 } 00:22:06.520 } 00:22:06.520 ]' 00:22:06.520 20:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:06.520 20:09:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:06.520 20:09:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:06.520 20:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.520 20:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:06.520 20:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.520 20:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.520 20:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.778 20:09:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=: 00:22:07.715 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.715 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.715 20:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.715 20:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.715 20:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.715 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:07.715 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:07.715 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:07.715 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:07.715 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:07.715 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:07.973 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:07.973 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:07.973 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:07.973 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:07.973 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:07.973 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.973 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
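Having cycled the single combinations, the harness re-enables the whole matrix at once (sha256,sha384,sha512 across null through ffdhe8192) for one more authenticated attach with key0/ckey0, then turns to the failure paths visible below, where the host deliberately presents a key the subsystem was not provisioned with. A minimal sketch of that expect-failure pattern, reusing the placeholder names from the sketch above and plain shell in place of the harness's NOT wrapper from common/autotest_common.sh:

# Negative path: subsystem admits the host with key1, but the host attaches
# with key2, so authentication must fail. Variable names as in the earlier
# sketch; the mismatch is deliberate.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1
if "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2; then
    echo "unexpected: attach succeeded with mismatched DH-HMAC-CHAP key" >&2
    exit 1
fi
# rpc.py surfaces the refused attach as JSON-RPC error code -5
# ("Input/output error"), matching the error responses later in this log.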
00:22:07.973 20:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.973 20:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.973 20:09:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.973 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.973 20:09:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.909 00:22:08.909 20:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:08.909 20:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:08.909 20:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.167 20:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.167 20:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.167 20:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.167 20:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.167 20:09:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.167 20:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:09.167 { 00:22:09.167 "cntlid": 145, 00:22:09.167 "qid": 0, 00:22:09.167 "state": "enabled", 00:22:09.167 "listen_address": { 00:22:09.167 "trtype": "TCP", 00:22:09.167 "adrfam": "IPv4", 00:22:09.167 "traddr": "10.0.0.2", 00:22:09.167 "trsvcid": "4420" 00:22:09.167 }, 00:22:09.167 "peer_address": { 00:22:09.167 "trtype": "TCP", 00:22:09.167 "adrfam": "IPv4", 00:22:09.167 "traddr": "10.0.0.1", 00:22:09.167 "trsvcid": "33480" 00:22:09.167 }, 00:22:09.167 "auth": { 00:22:09.167 "state": "completed", 00:22:09.167 "digest": "sha512", 00:22:09.167 "dhgroup": "ffdhe8192" 00:22:09.167 } 00:22:09.167 } 00:22:09.167 ]' 00:22:09.168 20:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:09.168 20:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.168 20:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:09.425 20:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.425 20:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:09.425 20:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.425 20:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.425 20:09:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.683 
20:09:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTQxNjU4YjFjZjJhZjhmYTJkMDRiZDg1YjFiZDU0MWY3ZWVjYjcyMDgyOGEwOTM5ZAUjTg==: --dhchap-ctrl-secret DHHC-1:03:NWMyYjJjODg2ZDdhMTNhNjk2Njc4OWY4NDQ5MjhlYWI5ZjgyYjJjNDdlZmIyOTk1NGI0ZjgyYmQwZTQ2MDE3NHq6S24=: 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:10.617 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:10.618 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:10.618 20:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:11.552 request: 00:22:11.552 { 00:22:11.552 "name": "nvme0", 00:22:11.552 "trtype": "tcp", 00:22:11.552 "traddr": 
"10.0.0.2", 00:22:11.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:11.552 "adrfam": "ipv4", 00:22:11.552 "trsvcid": "4420", 00:22:11.552 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:11.552 "dhchap_key": "key2", 00:22:11.552 "method": "bdev_nvme_attach_controller", 00:22:11.552 "req_id": 1 00:22:11.552 } 00:22:11.552 Got JSON-RPC error response 00:22:11.552 response: 00:22:11.552 { 00:22:11.552 "code": -5, 00:22:11.552 "message": "Input/output error" 00:22:11.552 } 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:11.552 20:09:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:12.492 request: 00:22:12.492 { 00:22:12.492 "name": "nvme0", 00:22:12.492 "trtype": "tcp", 00:22:12.492 "traddr": "10.0.0.2", 00:22:12.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:12.492 "adrfam": "ipv4", 00:22:12.492 "trsvcid": "4420", 00:22:12.492 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:12.492 "dhchap_key": "key1", 00:22:12.492 "dhchap_ctrlr_key": "ckey2", 00:22:12.492 "method": "bdev_nvme_attach_controller", 00:22:12.492 "req_id": 1 00:22:12.492 } 00:22:12.492 Got JSON-RPC error response 00:22:12.492 response: 00:22:12.492 { 00:22:12.492 "code": -5, 00:22:12.492 "message": "Input/output error" 00:22:12.492 } 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.492 20:09:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.061 request: 00:22:13.061 { 00:22:13.061 "name": "nvme0", 00:22:13.061 "trtype": "tcp", 00:22:13.061 "traddr": "10.0.0.2", 00:22:13.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:13.061 "adrfam": "ipv4", 00:22:13.061 "trsvcid": "4420", 00:22:13.061 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:13.061 "dhchap_key": "key1", 00:22:13.061 "dhchap_ctrlr_key": "ckey1", 00:22:13.061 "method": "bdev_nvme_attach_controller", 00:22:13.061 "req_id": 1 00:22:13.061 } 00:22:13.061 Got JSON-RPC error response 00:22:13.061 response: 00:22:13.061 { 00:22:13.061 "code": -5, 00:22:13.061 "message": "Input/output error" 00:22:13.061 } 00:22:13.061 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:13.061 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:13.061 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:13.061 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:13.062 20:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:13.062 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.062 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.062 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.062 20:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3203697 00:22:13.062 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3203697 ']' 00:22:13.062 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3203697 00:22:13.062 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:13.062 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:13.062 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3203697 00:22:13.062 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:13.062 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:13.062 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3203697' 00:22:13.062 killing process with pid 3203697 00:22:13.062 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3203697 00:22:13.062 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3203697 00:22:13.320 20:10:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:13.320 20:10:00 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:13.320 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:13.320 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.320 20:10:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3226205 00:22:13.320 20:10:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:13.320 20:10:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3226205 00:22:13.320 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3226205 ']' 00:22:13.320 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.320 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:13.320 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.320 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:13.320 20:10:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.579 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:13.579 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:13.579 20:10:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:13.579 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.579 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.579 20:10:01 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.579 20:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:13.579 20:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3226205 00:22:13.579 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 3226205 ']' 00:22:13.579 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.579 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:13.579 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
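The trace above shows nvmf/common.sh relaunching the target inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc -L nvmf_auth and then parking in waitforlisten until /var/tmp/spdk.sock answers. A minimal sketch of that start-and-wait pattern, assuming only standard SPDK RPCs (rpc_get_methods, framework_start_init); the polling loop is illustrative, not the verbatim waitforlisten from autotest_common.sh:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the app is ready to serve RPCs.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
        sleep 0.5
    done
    # --wait-for-rpc holds the app before subsystem init; release it explicitly.
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_start_init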
00:22:13.579 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable
00:22:13.579 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:22:14.144 20:10:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:22:15.080
00:22:15.080 20:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:22:15.080 20:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:22:15.080 20:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:15.340 20:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:15.340 20:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:15.340 20:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:15.340 20:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:15.340 20:10:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:15.340 20:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:22:15.340 {
00:22:15.340 "cntlid": 1,
00:22:15.340 "qid": 0,
00:22:15.340 "state": "enabled",
00:22:15.340 "listen_address": {
00:22:15.340 "trtype": "TCP",
00:22:15.340 "adrfam": "IPv4",
00:22:15.340 "traddr": "10.0.0.2",
00:22:15.340 "trsvcid": "4420"
00:22:15.340 },
00:22:15.340 "peer_address": {
00:22:15.340 "trtype": "TCP",
00:22:15.340 "adrfam": "IPv4",
00:22:15.340 "traddr": "10.0.0.1",
00:22:15.340 "trsvcid": "50856"
00:22:15.340 },
00:22:15.340 "auth": {
00:22:15.340 "state": "completed",
00:22:15.340 "digest": "sha512",
00:22:15.340 "dhgroup": "ffdhe8192"
00:22:15.340 }
00:22:15.340 }
00:22:15.340 ]'
00:22:15.340 20:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:22:15.340 20:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:15.340 20:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:22:15.340 20:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:15.340 20:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:22:15.340 20:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:15.340 20:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:15.340 20:10:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:15.599 20:10:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Yzk5ZWE5MjA4OGZlMjkzNzVjYzJhYjBjZWMwZGRjMGFjNjkxZTBlMjc5ZTNkYTMwN2I2YjNmNzc1MGRjMjJmZvm0wZ0=:
00:22:16.536 20:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:16.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:16.536 20:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:22:16.536 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:16.536 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:16.536 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:16.536 20:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:22:16.536 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:16.536 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:16.536 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:16.536 20:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:22:16.536 20:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:22:16.795 20:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:16.795 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:16.795 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:16.795 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:16.795 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.795 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:16.795 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.795 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:16.795 20:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.054 request: 00:22:17.054 { 00:22:17.054 "name": "nvme0", 00:22:17.054 "trtype": "tcp", 00:22:17.054 "traddr": "10.0.0.2", 00:22:17.054 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:17.054 "adrfam": "ipv4", 00:22:17.054 "trsvcid": "4420", 00:22:17.054 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:17.054 "dhchap_key": "key3", 00:22:17.054 "method": "bdev_nvme_attach_controller", 00:22:17.054 "req_id": 1 00:22:17.054 } 00:22:17.054 Got JSON-RPC error response 00:22:17.054 response: 00:22:17.054 { 00:22:17.054 "code": -5, 00:22:17.054 "message": "Input/output error" 00:22:17.054 } 00:22:17.054 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:17.054 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:17.054 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:17.054 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:17.054 20:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:17.054 20:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:17.054 20:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:17.054 20:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:17.621 20:10:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.621 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:17.621 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.621 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:17.621 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.621 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:17.621 20:10:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:17.621 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.621 20:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:17.881 request: 00:22:17.881 { 00:22:17.881 "name": "nvme0", 00:22:17.881 "trtype": "tcp", 00:22:17.881 "traddr": "10.0.0.2", 00:22:17.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:17.881 "adrfam": "ipv4", 00:22:17.881 "trsvcid": "4420", 00:22:17.881 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:17.881 "dhchap_key": "key3", 00:22:17.881 "method": "bdev_nvme_attach_controller", 00:22:17.881 "req_id": 1 00:22:17.881 } 00:22:17.881 Got JSON-RPC error response 00:22:17.881 response: 00:22:17.881 { 00:22:17.881 "code": -5, 00:22:17.881 "message": "Input/output error" 00:22:17.881 } 00:22:17.881 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:17.881 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:17.881 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:17.881 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:17.881 20:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:17.881 20:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:17.881 20:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:17.881 20:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:17.881 20:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:17.881 20:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.139 20:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.397 request: 00:22:18.397 { 00:22:18.397 "name": "nvme0", 00:22:18.397 "trtype": "tcp", 00:22:18.397 "traddr": "10.0.0.2", 00:22:18.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:18.397 "adrfam": "ipv4", 00:22:18.397 "trsvcid": "4420", 00:22:18.397 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:18.397 "dhchap_key": "key0", 00:22:18.397 "dhchap_ctrlr_key": "key1", 00:22:18.397 "method": "bdev_nvme_attach_controller", 00:22:18.397 "req_id": 1 00:22:18.397 } 00:22:18.397 Got JSON-RPC error response 00:22:18.397 response: 00:22:18.397 { 00:22:18.397 "code": -5, 00:22:18.397 "message": "Input/output error" 00:22:18.397 } 00:22:18.397 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:18.397 20:10:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:18.397 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:18.397 20:10:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:18.397 20:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:18.397 20:10:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:18.655 00:22:18.655 20:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:18.655 20:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:18.655 20:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.913 20:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.913 20:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.913 20:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.172 20:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:19.172 20:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:19.172 20:10:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3203722 00:22:19.172 20:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3203722 ']' 00:22:19.172 20:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3203722 00:22:19.172 20:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:19.172 20:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:19.172 20:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3203722 00:22:19.172 20:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:19.172 20:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:19.172 20:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3203722' 00:22:19.172 killing process with pid 3203722 00:22:19.172 20:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3203722 00:22:19.172 20:10:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3203722 00:22:19.431 20:10:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:19.431 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:19.431 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:19.431 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:19.431 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 
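Two autotest_common.sh helpers dominate this stretch of the log: NOT, which wraps every bad-key attach attempt and succeeds only when the wrapped command fails, and killprocess, whose kill -0/ps/kill/wait sequence appears repeatedly in this section. Rough bash sketches of both, illustrative rather than the verbatim implementations (the real NOT also inspects exit-status thresholds, per the (( es > 128 )) traces above):

    NOT() {
        # e.g. NOT hostrpc bdev_nvme_attach_controller ... --dhchap-ctrlr-key ckey2
        if "$@"; then
            return 1    # the command unexpectedly succeeded
        fi
        return 0        # expected failure: the negative test passes
    }

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1     # still running?
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true            # reap; SIGTERM makes wait return nonzero
    }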
00:22:19.431 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:19.431 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:19.691 rmmod nvme_tcp 00:22:19.691 rmmod nvme_fabrics 00:22:19.691 rmmod nvme_keyring 00:22:19.691 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:19.691 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:19.691 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:19.691 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3226205 ']' 00:22:19.691 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3226205 00:22:19.691 20:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 3226205 ']' 00:22:19.691 20:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 3226205 00:22:19.691 20:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:22:19.691 20:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:19.691 20:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3226205 00:22:19.691 20:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:19.691 20:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:19.691 20:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3226205' 00:22:19.691 killing process with pid 3226205 00:22:19.691 20:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 3226205 00:22:19.691 20:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 3226205 00:22:19.950 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:19.950 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:19.950 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:19.950 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:19.950 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:19.950 20:10:07 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.950 20:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:19.950 20:10:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.854 20:10:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:21.854 20:10:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DLW /tmp/spdk.key-sha256.0mC /tmp/spdk.key-sha384.qJ3 /tmp/spdk.key-sha512.kkZ /tmp/spdk.key-sha512.ZY0 /tmp/spdk.key-sha384.LyW /tmp/spdk.key-sha256.6Fz '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:21.854 00:22:21.854 real 3m9.259s 00:22:21.854 user 7m20.357s 00:22:21.854 sys 0m24.926s 00:22:21.854 20:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:21.854 20:10:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.854 ************************************ 00:22:21.854 END TEST 
nvmf_auth_target 00:22:21.854 ************************************ 00:22:21.854 20:10:09 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:21.854 20:10:09 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:21.854 20:10:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:21.854 20:10:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:21.854 20:10:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.113 ************************************ 00:22:22.113 START TEST nvmf_bdevio_no_huge 00:22:22.113 ************************************ 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:22.113 * Looking for test storage... 00:22:22.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
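run_test brackets each suite with the START/END banners above and its own xtrace scope. To re-run just this suite by hand from the same workspace (a sketch; judging from the app invocations later in this log, the script turns --no-hugepages into --no-huge -s 1024 for nvmf_tgt and bdevio, i.e. hugepage-free DPDK initialization with a 1024 MiB heap):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages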
00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:22.113 20:10:09 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:24.017 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:24.017 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:24.017 20:10:11 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:24.017 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:24.017 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:24.017 
20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:22:24.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:24.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms
00:22:24.017
00:22:24.017 --- 10.0.0.2 ping statistics ---
00:22:24.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:24.017 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms
00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:24.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:24.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms
00:22:24.017
00:22:24.017 --- 10.0.0.1 ping statistics ---
00:22:24.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:24.017 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms
00:22:24.017 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:24.018 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0
00:22:24.018 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:22:24.018 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:24.018 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:22:24.018 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:22:24.018 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:24.018 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:22:24.018 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:22:24.278 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:22:24.278 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:22:24.278 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable
00:22:24.278 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:22:24.278 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3228970
00:22:24.278 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:22:24.278 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3228970
00:22:24.279 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 3228970 ']'
00:22:24.279 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:24.279 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge --
common/autotest_common.sh@832 -- # local max_retries=100 00:22:24.279 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:24.279 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:24.279 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:24.279 [2024-07-13 20:10:11.721433] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:24.279 [2024-07-13 20:10:11.721520] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:24.279 [2024-07-13 20:10:11.790798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.279 [2024-07-13 20:10:11.886604] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.279 [2024-07-13 20:10:11.886673] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.279 [2024-07-13 20:10:11.886690] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.279 [2024-07-13 20:10:11.886709] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.279 [2024-07-13 20:10:11.886721] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:24.279 [2024-07-13 20:10:11.886816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:24.279 [2024-07-13 20:10:11.886882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:24.279 [2024-07-13 20:10:11.886940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:24.279 [2024-07-13 20:10:11.886943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.540 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:24.540 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:22:24.540 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:24.540 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:24.540 20:10:11 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:24.540 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.540 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:24.540 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.540 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:24.540 [2024-07-13 20:10:12.015207] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.540 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.540 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:24.540 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:24.540 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:24.540 Malloc0 00:22:24.540 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.540 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:24.540 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:24.541 [2024-07-13 20:10:12.053510] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:24.541 { 00:22:24.541 "params": { 00:22:24.541 "name": "Nvme$subsystem", 00:22:24.541 "trtype": "$TEST_TRANSPORT", 00:22:24.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.541 "adrfam": "ipv4", 00:22:24.541 "trsvcid": "$NVMF_PORT", 00:22:24.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.541 "hdgst": ${hdgst:-false}, 00:22:24.541 "ddgst": ${ddgst:-false} 00:22:24.541 }, 00:22:24.541 "method": "bdev_nvme_attach_controller" 00:22:24.541 } 00:22:24.541 EOF 00:22:24.541 )") 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
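For orientation, the target-side setup traced above reduces to five RPC calls. A minimal standalone sketch, assuming a running nvmf_tgt and scripts/rpc.py pointed at the default /var/tmp/spdk.sock (rpc_cmd in the trace corresponds to these rpc.py invocations):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB malloc bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON emitted by gen_nvmf_target_json just below is what bdevio consumes via --json /dev/fd/62: one bdev_nvme_attach_controller call per subsystem, with the $TEST_TRANSPORT / $NVMF_FIRST_TARGET_IP placeholders resolved to tcp / 10.0.0.2.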
00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:24.541 20:10:12 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:24.541 "params": { 00:22:24.541 "name": "Nvme1", 00:22:24.541 "trtype": "tcp", 00:22:24.541 "traddr": "10.0.0.2", 00:22:24.541 "adrfam": "ipv4", 00:22:24.541 "trsvcid": "4420", 00:22:24.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.541 "hdgst": false, 00:22:24.541 "ddgst": false 00:22:24.541 }, 00:22:24.541 "method": "bdev_nvme_attach_controller" 00:22:24.541 }' 00:22:24.541 [2024-07-13 20:10:12.100436] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:24.541 [2024-07-13 20:10:12.100502] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3228994 ] 00:22:24.541 [2024-07-13 20:10:12.160262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:24.800 [2024-07-13 20:10:12.247032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.800 [2024-07-13 20:10:12.247085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.800 [2024-07-13 20:10:12.247088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.800 I/O targets: 00:22:24.800 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:24.800 00:22:24.800 00:22:24.800 CUnit - A unit testing framework for C - Version 2.1-3 00:22:24.800 http://cunit.sourceforge.net/ 00:22:24.800 00:22:24.800 00:22:24.800 Suite: bdevio tests on: Nvme1n1 00:22:24.800 Test: blockdev write read block ...passed 00:22:25.059 Test: blockdev write zeroes read block ...passed 00:22:25.059 Test: blockdev write zeroes read no split ...passed 00:22:25.059 Test: blockdev write zeroes read split ...passed 00:22:25.059 Test: blockdev write zeroes read split partial ...passed 00:22:25.059 Test: blockdev reset ...[2024-07-13 20:10:12.612362] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:25.059 [2024-07-13 20:10:12.612475] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4ce2a0 (9): Bad file descriptor 00:22:25.059 [2024-07-13 20:10:12.627600] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:25.059 passed 00:22:25.059 Test: blockdev write read 8 blocks ...passed 00:22:25.059 Test: blockdev write read size > 128k ...passed 00:22:25.059 Test: blockdev write read invalid size ...passed 00:22:25.059 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:25.059 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:25.059 Test: blockdev write read max offset ...passed 00:22:25.317 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:25.317 Test: blockdev writev readv 8 blocks ...passed 00:22:25.317 Test: blockdev writev readv 30 x 1block ...passed 00:22:25.317 Test: blockdev writev readv block ...passed 00:22:25.317 Test: blockdev writev readv size > 128k ...passed 00:22:25.317 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:25.317 Test: blockdev comparev and writev ...[2024-07-13 20:10:12.923589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.317 [2024-07-13 20:10:12.923625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:25.317 [2024-07-13 20:10:12.923650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.317 [2024-07-13 20:10:12.923667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:25.317 [2024-07-13 20:10:12.924070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.317 [2024-07-13 20:10:12.924102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:25.317 [2024-07-13 20:10:12.924126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.317 [2024-07-13 20:10:12.924141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:25.317 [2024-07-13 20:10:12.924517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.317 [2024-07-13 20:10:12.924543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:25.317 [2024-07-13 20:10:12.924568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.317 [2024-07-13 20:10:12.924598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:25.317 [2024-07-13 20:10:12.924996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.317 [2024-07-13 20:10:12.925022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:25.317 [2024-07-13 20:10:12.925045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:25.317 [2024-07-13 20:10:12.925061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:25.317 passed 00:22:25.574 Test: blockdev nvme passthru rw ...passed 00:22:25.574 Test: blockdev nvme passthru vendor specific ...[2024-07-13 20:10:13.007238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:25.574 [2024-07-13 20:10:13.007265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:25.574 [2024-07-13 20:10:13.007470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:25.574 [2024-07-13 20:10:13.007495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:25.574 [2024-07-13 20:10:13.007691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:25.574 [2024-07-13 20:10:13.007715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:25.574 [2024-07-13 20:10:13.007917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:25.574 [2024-07-13 20:10:13.007941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:25.574 passed 00:22:25.574 Test: blockdev nvme admin passthru ...passed 00:22:25.574 Test: blockdev copy ...passed 00:22:25.574 00:22:25.574 Run Summary: Type Total Ran Passed Failed Inactive 00:22:25.574 suites 1 1 n/a 0 0 00:22:25.574 tests 23 23 23 0 0 00:22:25.574 asserts 152 152 152 0 n/a 00:22:25.574 00:22:25.574 Elapsed time = 1.330 seconds 00:22:25.831 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:25.831 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.831 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:25.831 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:25.832 rmmod nvme_tcp 00:22:25.832 rmmod nvme_fabrics 00:22:25.832 rmmod nvme_keyring 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3228970 ']' 00:22:25.832 20:10:13 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3228970 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 3228970 ']' 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 3228970 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3228970 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3228970' 00:22:25.832 killing process with pid 3228970 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 3228970 00:22:25.832 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 3228970 00:22:26.400 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:26.400 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:26.400 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:26.400 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:26.400 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:26.400 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.400 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.400 20:10:13 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.302 20:10:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:28.302 00:22:28.303 real 0m6.352s 00:22:28.303 user 0m10.222s 00:22:28.303 sys 0m2.483s 00:22:28.303 20:10:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:28.303 20:10:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:28.303 ************************************ 00:22:28.303 END TEST nvmf_bdevio_no_huge 00:22:28.303 ************************************ 00:22:28.303 20:10:15 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:28.303 20:10:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:28.303 20:10:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:28.303 20:10:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:28.303 ************************************ 00:22:28.303 START TEST nvmf_tls 00:22:28.303 ************************************ 00:22:28.303 20:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:28.563 * Looking for test storage... 
00:22:28.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:28.563 20:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:30.467 
20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:30.467 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:30.468 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:30.468 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:30.468 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:30.468 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.468 20:10:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:30.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:22:30.468 00:22:30.468 --- 10.0.0.2 ping statistics --- 00:22:30.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.468 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:30.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:22:30.468 00:22:30.468 --- 10.0.0.1 ping statistics --- 00:22:30.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.468 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3231124 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:30.468 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3231124 00:22:30.469 20:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3231124 ']' 00:22:30.469 20:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.469 20:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:30.469 20:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.469 20:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:30.469 20:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.734 [2024-07-13 20:10:18.140332] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:22:30.734 [2024-07-13 20:10:18.140403] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.734 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.734 [2024-07-13 20:10:18.206349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.734 [2024-07-13 20:10:18.289793] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.734 [2024-07-13 20:10:18.289846] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
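The two ping exchanges above are the tail end of nvmf_tcp_init. Condensed from the trace, the namespace plumbing being verified looks roughly like this (cvl_0_0 and cvl_0_1 are the net devices found under the two e810 ports earlier; the target side lives in the cvl_0_0_ns_spdk namespace):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

This is why the nvmf_tgt launched next is prefixed with "ip netns exec cvl_0_0_ns_spdk": it has to listen on the namespaced 10.0.0.2 address that the listener RPCs reference.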
00:22:30.734 [2024-07-13 20:10:18.289877] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.734 [2024-07-13 20:10:18.289889] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.734 [2024-07-13 20:10:18.289899] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.734 [2024-07-13 20:10:18.289931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.734 20:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:30.734 20:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:30.734 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:30.734 20:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.734 20:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:30.734 20:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.734 20:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:30.734 20:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:31.025 true 00:22:31.025 20:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:31.025 20:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:31.283 20:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:31.283 20:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:31.283 20:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:31.542 20:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:31.542 20:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:31.802 20:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:31.802 20:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:31.802 20:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:32.062 20:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:32.062 20:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:32.320 20:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:32.320 20:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:32.320 20:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:32.320 20:10:19 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:32.578 20:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:32.578 20:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:32.578 20:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:32.835 20:10:20 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:32.835 20:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:33.092 20:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:33.092 20:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:33.092 20:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:33.352 20:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:33.352 20:10:20 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.fDJ9sCHpSn 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.iVxDVJYhuK 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.fDJ9sCHpSn 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.iVxDVJYhuK 00:22:33.610 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:33.868 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:34.433 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.fDJ9sCHpSn 00:22:34.433 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.fDJ9sCHpSn 00:22:34.433 20:10:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:34.433 [2024-07-13 20:10:22.069203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.433 20:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:34.690 20:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:34.947 [2024-07-13 20:10:22.542459] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:34.947 [2024-07-13 20:10:22.542699] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.947 20:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:35.204 malloc0 00:22:35.204 20:10:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:35.459 20:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fDJ9sCHpSn 00:22:35.716 [2024-07-13 20:10:23.284428] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:35.716 20:10:23 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.fDJ9sCHpSn 00:22:35.716 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.932 Initializing NVMe Controllers 00:22:47.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:47.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:47.933 Initialization complete. Launching workers. 
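The NVMeTLSkey-1 strings generated above and handed to nvmf_subsystem_add_host / the perf initiator come from format_interchange_psk. A hedged reconstruction of what that helper appears to compute, assuming the interchange layout suggested by the trace (the configured key's ASCII bytes followed by a little-endian CRC-32 trailer, base64-encoded; the actual implementation lives in nvmf/common.sh):

python3 <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"      # the ASCII hex string itself is the key material
crc = zlib.crc32(key).to_bytes(4, "little")    # assumed: CRC-32 trailer, little-endian
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
EOF
# per the trace this should print:
# NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Both key files are restricted to mode 0600 before being passed as --psk / --psk-path. The second key (/tmp/tmp.iVxDVJYhuK) is deliberately different from the one registered with the target, so the negative test further down can show that a mismatched PSK makes bdev_nvme_attach_controller fail.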
00:22:47.933 ======================================================== 00:22:47.933 Latency(us) 00:22:47.933 Device Information : IOPS MiB/s Average min max 00:22:47.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7810.96 30.51 8196.43 1131.67 9691.07 00:22:47.933 ======================================================== 00:22:47.933 Total : 7810.96 30.51 8196.43 1131.67 9691.07 00:22:47.933 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fDJ9sCHpSn 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fDJ9sCHpSn' 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3232950 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3232950 /var/tmp/bdevperf.sock 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3232950 ']' 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.933 [2024-07-13 20:10:33.455189] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:47.933 [2024-07-13 20:10:33.455268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3232950 ] 00:22:47.933 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.933 [2024-07-13 20:10:33.511790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.933 [2024-07-13 20:10:33.594815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:47.933 20:10:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fDJ9sCHpSn 00:22:47.933 [2024-07-13 20:10:33.942373] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:47.933 [2024-07-13 20:10:33.942501] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:47.933 TLSTESTn1 00:22:47.933 20:10:34 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:47.933 Running I/O for 10 seconds... 00:22:57.899 00:22:57.899 Latency(us) 00:22:57.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.899 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:57.899 Verification LBA range: start 0x0 length 0x2000 00:22:57.899 TLSTESTn1 : 10.06 1937.91 7.57 0.00 0.00 65863.95 9077.95 95536.92 00:22:57.899 =================================================================================================================== 00:22:57.899 Total : 1937.91 7.57 0.00 0.00 65863.95 9077.95 95536.92 00:22:57.899 0 00:22:57.899 20:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.899 20:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3232950 00:22:57.899 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3232950 ']' 00:22:57.899 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3232950 00:22:57.899 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3232950 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3232950' 00:22:57.900 killing process with pid 3232950 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3232950 00:22:57.900 Received shutdown signal, test time was about 10.000000 seconds 00:22:57.900 00:22:57.900 Latency(us) 00:22:57.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:22:57.900 =================================================================================================================== 00:22:57.900 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:57.900 [2024-07-13 20:10:44.258739] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3232950 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iVxDVJYhuK 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iVxDVJYhuK 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iVxDVJYhuK 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iVxDVJYhuK' 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3234260 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3234260 /var/tmp/bdevperf.sock 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3234260 ']' 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.900 [2024-07-13 20:10:44.521791] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:57.900 [2024-07-13 20:10:44.521889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234260 ] 00:22:57.900 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.900 [2024-07-13 20:10:44.580076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.900 [2024-07-13 20:10:44.664899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:57.900 20:10:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iVxDVJYhuK 00:22:57.900 [2024-07-13 20:10:45.033415] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:57.900 [2024-07-13 20:10:45.033550] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:57.900 [2024-07-13 20:10:45.038810] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:57.900 [2024-07-13 20:10:45.039373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dd840 (107): Transport endpoint is not connected 00:22:57.900 [2024-07-13 20:10:45.040361] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dd840 (9): Bad file descriptor 00:22:57.900 [2024-07-13 20:10:45.041358] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:57.900 [2024-07-13 20:10:45.041378] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:57.900 [2024-07-13 20:10:45.041412] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:57.900 request: 00:22:57.900 { 00:22:57.900 "name": "TLSTEST", 00:22:57.900 "trtype": "tcp", 00:22:57.900 "traddr": "10.0.0.2", 00:22:57.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.900 "adrfam": "ipv4", 00:22:57.900 "trsvcid": "4420", 00:22:57.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.900 "psk": "/tmp/tmp.iVxDVJYhuK", 00:22:57.900 "method": "bdev_nvme_attach_controller", 00:22:57.900 "req_id": 1 00:22:57.900 } 00:22:57.900 Got JSON-RPC error response 00:22:57.900 response: 00:22:57.900 { 00:22:57.900 "code": -5, 00:22:57.900 "message": "Input/output error" 00:22:57.900 } 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3234260 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3234260 ']' 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3234260 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3234260 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3234260' 00:22:57.900 killing process with pid 3234260 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3234260 00:22:57.900 Received shutdown signal, test time was about 10.000000 seconds 00:22:57.900 00:22:57.900 Latency(us) 00:22:57.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.900 =================================================================================================================== 00:22:57.900 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:57.900 [2024-07-13 20:10:45.085277] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3234260 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.fDJ9sCHpSn 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.fDJ9sCHpSn 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.fDJ9sCHpSn 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fDJ9sCHpSn' 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3234284 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3234284 /var/tmp/bdevperf.sock 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3234284 ']' 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:57.900 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.900 [2024-07-13 20:10:45.315237] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:57.900 [2024-07-13 20:10:45.315320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234284 ] 00:22:57.900 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.900 [2024-07-13 20:10:45.373622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.900 [2024-07-13 20:10:45.462072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.160 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:58.160 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:58.160 20:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.fDJ9sCHpSn 00:22:58.160 [2024-07-13 20:10:45.788080] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:58.160 [2024-07-13 20:10:45.788214] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:58.160 [2024-07-13 20:10:45.798062] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:58.160 [2024-07-13 20:10:45.798095] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:58.160 [2024-07-13 20:10:45.798150] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:58.160 [2024-07-13 20:10:45.799108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2502840 (107): Transport endpoint is not connected 00:22:58.160 [2024-07-13 20:10:45.800097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2502840 (9): Bad file descriptor 00:22:58.160 [2024-07-13 20:10:45.801097] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:58.160 [2024-07-13 20:10:45.801115] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:58.160 [2024-07-13 20:10:45.801146] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
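The host-mismatch failure above is resolved on the target side by a PSK identity lookup: the server derives an identity string from the TLS ClientHello and finds no key registered for host2 against cnode1. Judging from the error text, the identity has the form sketched below (the exact construction lives in SPDK's tcp.c / posix.c):

# PSK identity as printed by the errors above: "NVMe0R01 <hostnqn> <subnqn>"
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
identity="NVMe0R01 ${hostnqn} ${subnqn}"
echo "$identity"
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1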
00:22:58.160 request: 00:22:58.160 { 00:22:58.160 "name": "TLSTEST", 00:22:58.160 "trtype": "tcp", 00:22:58.160 "traddr": "10.0.0.2", 00:22:58.160 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:58.160 "adrfam": "ipv4", 00:22:58.160 "trsvcid": "4420", 00:22:58.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.160 "psk": "/tmp/tmp.fDJ9sCHpSn", 00:22:58.160 "method": "bdev_nvme_attach_controller", 00:22:58.160 "req_id": 1 00:22:58.160 } 00:22:58.160 Got JSON-RPC error response 00:22:58.160 response: 00:22:58.160 { 00:22:58.160 "code": -5, 00:22:58.160 "message": "Input/output error" 00:22:58.160 } 00:22:58.418 20:10:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3234284 00:22:58.418 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3234284 ']' 00:22:58.418 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3234284 00:22:58.418 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:58.418 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:58.418 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3234284 00:22:58.418 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:58.418 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:58.419 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3234284' 00:22:58.419 killing process with pid 3234284 00:22:58.419 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3234284 00:22:58.419 Received shutdown signal, test time was about 10.000000 seconds 00:22:58.419 00:22:58.419 Latency(us) 00:22:58.419 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.419 =================================================================================================================== 00:22:58.419 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:58.419 [2024-07-13 20:10:45.852394] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:58.419 20:10:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3234284 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.fDJ9sCHpSn 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.fDJ9sCHpSn 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.fDJ9sCHpSn 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fDJ9sCHpSn' 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3234421 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:58.419 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3234421 /var/tmp/bdevperf.sock 00:22:58.679 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3234421 ']' 00:22:58.679 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.679 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:58.679 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.679 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:58.679 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.679 [2024-07-13 20:10:46.116131] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:58.679 [2024-07-13 20:10:46.116223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234421 ] 00:22:58.679 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.679 [2024-07-13 20:10:46.173178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.679 [2024-07-13 20:10:46.253242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.937 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:58.937 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:58.937 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fDJ9sCHpSn 00:22:58.937 [2024-07-13 20:10:46.593128] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:58.937 [2024-07-13 20:10:46.593279] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:59.233 [2024-07-13 20:10:46.600138] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:59.233 [2024-07-13 20:10:46.600184] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:59.233 [2024-07-13 20:10:46.600239] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:59.233 [2024-07-13 20:10:46.600408] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9f840 (107): Transport endpoint is not connected 00:22:59.233 [2024-07-13 20:10:46.601310] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9f840 (9): Bad file descriptor 00:22:59.233 [2024-07-13 20:10:46.602309] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:59.233 [2024-07-13 20:10:46.602330] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:59.233 [2024-07-13 20:10:46.602362] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:59.233 request: 00:22:59.233 { 00:22:59.233 "name": "TLSTEST", 00:22:59.233 "trtype": "tcp", 00:22:59.233 "traddr": "10.0.0.2", 00:22:59.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:59.233 "adrfam": "ipv4", 00:22:59.233 "trsvcid": "4420", 00:22:59.233 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:59.233 "psk": "/tmp/tmp.fDJ9sCHpSn", 00:22:59.233 "method": "bdev_nvme_attach_controller", 00:22:59.233 "req_id": 1 00:22:59.233 } 00:22:59.233 Got JSON-RPC error response 00:22:59.233 response: 00:22:59.233 { 00:22:59.233 "code": -5, 00:22:59.233 "message": "Input/output error" 00:22:59.233 } 00:22:59.233 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3234421 00:22:59.233 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3234421 ']' 00:22:59.233 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3234421 00:22:59.233 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:22:59.233 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:59.233 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3234421 00:22:59.233 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:59.233 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:59.233 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3234421' 00:22:59.233 killing process with pid 3234421 00:22:59.233 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3234421 00:22:59.233 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.233 00:22:59.233 Latency(us) 00:22:59.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.233 =================================================================================================================== 00:22:59.233 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:59.233 [2024-07-13 20:10:46.652452] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:59.233 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3234421 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3234560 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3234560 /var/tmp/bdevperf.sock 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3234560 ']' 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:59.514 20:10:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.514 [2024-07-13 20:10:46.910105] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:22:59.514 [2024-07-13 20:10:46.910212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234560 ] 00:22:59.514 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.514 [2024-07-13 20:10:46.970491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.514 [2024-07-13 20:10:47.056781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.514 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:59.514 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:22:59.514 20:10:47 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:00.080 [2024-07-13 20:10:47.442454] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:00.080 [2024-07-13 20:10:47.444505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212ef10 (9): Bad file descriptor 00:23:00.080 [2024-07-13 20:10:47.445499] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:00.080 [2024-07-13 20:10:47.445519] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:00.080 [2024-07-13 20:10:47.445551] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
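The case above omits --psk entirely; because the listener is TLS-only (nvmf_subsystem_add_listener ... -k, as traced later in this log), the plaintext connection is dropped before controller initialization, so the failure surfaces as socket errors (errno 107, then a bad descriptor) rather than as an authentication error. Side by side, the two attach forms from the traces (rpc.py path shortened):

# TLS attach, as in the earlier cases:
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.iVxDVJYhuK

# Plaintext attach against the same TLS-only listener (this case):
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1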
00:23:00.080 request: 00:23:00.080 { 00:23:00.080 "name": "TLSTEST", 00:23:00.080 "trtype": "tcp", 00:23:00.080 "traddr": "10.0.0.2", 00:23:00.081 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.081 "adrfam": "ipv4", 00:23:00.081 "trsvcid": "4420", 00:23:00.081 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.081 "method": "bdev_nvme_attach_controller", 00:23:00.081 "req_id": 1 00:23:00.081 } 00:23:00.081 Got JSON-RPC error response 00:23:00.081 response: 00:23:00.081 { 00:23:00.081 "code": -5, 00:23:00.081 "message": "Input/output error" 00:23:00.081 } 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3234560 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3234560 ']' 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3234560 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3234560 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3234560' 00:23:00.081 killing process with pid 3234560 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3234560 00:23:00.081 Received shutdown signal, test time was about 10.000000 seconds 00:23:00.081 00:23:00.081 Latency(us) 00:23:00.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.081 =================================================================================================================== 00:23:00.081 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3234560 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3231124 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3231124 ']' 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3231124 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3231124 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3231124' 00:23:00.081 killing process with pid 3231124 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3231124 
00:23:00.081 [2024-07-13 20:10:47.735800] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:00.081 20:10:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3231124 00:23:00.341 20:10:47 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:00.341 20:10:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:00.341 20:10:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:00.341 20:10:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:00.341 20:10:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:00.341 20:10:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:00.341 20:10:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ksIJTUxwMO 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ksIJTUxwMO 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3234707 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3234707 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3234707 ']' 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:00.601 20:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.601 [2024-07-13 20:10:48.083831] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
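The key_long value computed just above is the NVMe TLS PSK in interchange format: the prefix NVMeTLSkey-1:<digest>: followed by base64 of the configured key bytes with a CRC32 appended, terminated by ':' (digest 02 selecting the SHA-384 hash variant). The format_interchange_psk helper shells out to Python for this; a standalone sketch, assuming zlib's CRC32 packed little-endian, which is what reproduces the value printed above:

key=00112233445566778899aabbccddeeff0011223344556677   # fed in as raw ASCII bytes
digest=2
python3 -c '
import base64, struct, sys, zlib
key = sys.argv[1].encode()                 # ASCII of the configured key string
crc = struct.pack("<I", zlib.crc32(key))   # 4-byte little-endian CRC32 suffix
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$key" "$digest"
# Expected (assuming zlib CRC32, per the interchange format):
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: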
00:23:00.601 [2024-07-13 20:10:48.083932] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.601 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.601 [2024-07-13 20:10:48.148772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.601 [2024-07-13 20:10:48.235811] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.601 [2024-07-13 20:10:48.235895] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.601 [2024-07-13 20:10:48.235919] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.601 [2024-07-13 20:10:48.235931] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.601 [2024-07-13 20:10:48.235941] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.601 [2024-07-13 20:10:48.235984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.860 20:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:00.860 20:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:00.860 20:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:00.860 20:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.860 20:10:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.860 20:10:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.860 20:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ksIJTUxwMO 00:23:00.860 20:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ksIJTUxwMO 00:23:00.860 20:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:01.118 [2024-07-13 20:10:48.637892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.118 20:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:01.376 20:10:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:01.635 [2024-07-13 20:10:49.123182] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:01.635 [2024-07-13 20:10:49.123427] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.635 20:10:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:01.894 malloc0 00:23:01.894 20:10:49 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:02.151 20:10:49 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ksIJTUxwMO 
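Taken together, the setup_nvmf_tgt trace above boils down to this RPC sequence against the target (rpc.py path shortened; arguments exactly as traced):

rpc.py nvmf_create_transport -t tcp -o                             # TCP transport init
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                                  # -k: TLS-only listener
rpc.py bdev_malloc_create 32 4096 -b malloc0                       # 32 MiB bdev, 4 KiB blocks
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ksIJTUxwMO            # register the host PSK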
00:23:02.410 [2024-07-13 20:10:49.849353] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ksIJTUxwMO 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ksIJTUxwMO' 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3234871 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3234871 /var/tmp/bdevperf.sock 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3234871 ']' 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:02.410 20:10:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.410 [2024-07-13 20:10:49.911775] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:02.410 [2024-07-13 20:10:49.911876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234871 ] 00:23:02.410 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.410 [2024-07-13 20:10:49.972577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.410 [2024-07-13 20:10:50.066346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.668 20:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:02.668 20:10:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:02.668 20:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ksIJTUxwMO 00:23:02.926 [2024-07-13 20:10:50.410490] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:02.926 [2024-07-13 20:10:50.410613] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:02.926 TLSTESTn1 00:23:02.926 20:10:50 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:03.186 Running I/O for 10 seconds... 00:23:13.166 00:23:13.166 Latency(us) 00:23:13.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:13.166 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:13.166 Verification LBA range: start 0x0 length 0x2000 00:23:13.166 TLSTESTn1 : 10.09 1573.29 6.15 0.00 0.00 81057.27 6650.69 121168.78 00:23:13.166 =================================================================================================================== 00:23:13.166 Total : 1573.29 6.15 0.00 0.00 81057.27 6650.69 121168.78 00:23:13.166 0 00:23:13.166 20:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:13.166 20:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3234871 00:23:13.166 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3234871 ']' 00:23:13.166 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3234871 00:23:13.166 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:13.166 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:13.166 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3234871 00:23:13.166 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:13.166 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:13.166 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3234871' 00:23:13.166 killing process with pid 3234871 00:23:13.166 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3234871 00:23:13.166 Received shutdown signal, test time was about 10.000000 seconds 00:23:13.166 00:23:13.166 Latency(us) 00:23:13.166 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:13.166 =================================================================================================================== 00:23:13.166 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:13.166 [2024-07-13 20:11:00.767017] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:13.166 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3234871 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ksIJTUxwMO 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ksIJTUxwMO 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ksIJTUxwMO 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ksIJTUxwMO 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ksIJTUxwMO' 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3236182 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3236182 /var/tmp/bdevperf.sock 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3236182 ']' 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:13.424 20:11:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:13.424 [2024-07-13 20:11:01.036103] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
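The passing TLSTESTn1 run above, like every case in this file, follows the same two-process pattern: bdevperf is started idle (-z) on a private RPC socket, the TLS-backed controller is attached over that socket, and the preconfigured workload is then kicked off by the companion script. Condensed, with paths shortened to their repo-relative forms:

# 1. bdevperf idle on its own RPC socket: core mask 0x4, QD 128, 4 KiB verify I/O, 10 s
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# 2. Create the TLS NVMe bdev via RPC (as traced above)
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ksIJTUxwMO

# 3. Trigger the run and wait up to 20 s for the results table
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests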
00:23:13.424 [2024-07-13 20:11:01.036210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3236182 ] 00:23:13.424 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.682 [2024-07-13 20:11:01.096018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.682 [2024-07-13 20:11:01.176719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.682 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:13.682 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:13.682 20:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ksIJTUxwMO 00:23:13.941 [2024-07-13 20:11:01.551305] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:13.941 [2024-07-13 20:11:01.551401] bdev_nvme.c:6122:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:13.941 [2024-07-13 20:11:01.551416] bdev_nvme.c:6231:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ksIJTUxwMO 00:23:13.941 request: 00:23:13.941 { 00:23:13.941 "name": "TLSTEST", 00:23:13.941 "trtype": "tcp", 00:23:13.941 "traddr": "10.0.0.2", 00:23:13.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.941 "adrfam": "ipv4", 00:23:13.941 "trsvcid": "4420", 00:23:13.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.941 "psk": "/tmp/tmp.ksIJTUxwMO", 00:23:13.941 "method": "bdev_nvme_attach_controller", 00:23:13.941 "req_id": 1 00:23:13.941 } 00:23:13.941 Got JSON-RPC error response 00:23:13.941 response: 00:23:13.941 { 00:23:13.941 "code": -1, 00:23:13.941 "message": "Operation not permitted" 00:23:13.941 } 00:23:13.941 20:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3236182 00:23:13.941 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3236182 ']' 00:23:13.941 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3236182 00:23:13.941 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:13.941 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:13.941 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3236182 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3236182' 00:23:14.200 killing process with pid 3236182 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3236182 00:23:14.200 Received shutdown signal, test time was about 10.000000 seconds 00:23:14.200 00:23:14.200 Latency(us) 00:23:14.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.200 =================================================================================================================== 00:23:14.200 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 3236182 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3234707 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3234707 ']' 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3234707 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3234707 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3234707' 00:23:14.200 killing process with pid 3234707 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3234707 00:23:14.200 [2024-07-13 20:11:01.851447] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:14.200 20:11:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3234707 00:23:14.459 20:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:14.459 20:11:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:14.459 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:14.459 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.459 20:11:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3236334 00:23:14.459 20:11:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:14.459 20:11:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3236334 00:23:14.459 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3236334 ']' 00:23:14.459 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.459 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:14.459 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.459 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:14.459 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.718 [2024-07-13 20:11:02.133362] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
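The chmod 0666 case above fails on the initiator side: bdev_nvme_load_psk refuses a PSK file that is accessible beyond its owner, so the attach is rejected with "Operation not permitted" before any connection is attempted (the matching server-side check appears just below when nvmf_subsystem_add_host is retried against the same file). A shell-level sketch of the rule being enforced (an approximation of the permission test, not the exact C check):

key=/tmp/tmp.ksIJTUxwMO
chmod 0600 "$key"               # accepted: owner-only access
mode=$(stat -c '%a' "$key")
if (( 8#$mode & 8#077 )); then  # any group/other permission bits set?
    echo "PSK file $key is too permissive (mode $mode)" >&2
fi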
00:23:14.718 [2024-07-13 20:11:02.133439] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.718 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.718 [2024-07-13 20:11:02.196438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.718 [2024-07-13 20:11:02.279087] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.718 [2024-07-13 20:11:02.279141] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.718 [2024-07-13 20:11:02.279172] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.718 [2024-07-13 20:11:02.279185] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.718 [2024-07-13 20:11:02.279195] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.718 [2024-07-13 20:11:02.279238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ksIJTUxwMO 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ksIJTUxwMO 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.ksIJTUxwMO 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ksIJTUxwMO 00:23:14.978 20:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:15.236 [2024-07-13 20:11:02.693729] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.236 20:11:02 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:15.493 20:11:03 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:15.749 [2024-07-13 20:11:03.235164] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:23:15.749 [2024-07-13 20:11:03.235412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.749 20:11:03 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:16.006 malloc0 00:23:16.006 20:11:03 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:16.263 20:11:03 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ksIJTUxwMO 00:23:16.521 [2024-07-13 20:11:04.061188] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:16.521 [2024-07-13 20:11:04.061233] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:16.521 [2024-07-13 20:11:04.061279] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:16.521 request: 00:23:16.521 { 00:23:16.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.521 "host": "nqn.2016-06.io.spdk:host1", 00:23:16.521 "psk": "/tmp/tmp.ksIJTUxwMO", 00:23:16.521 "method": "nvmf_subsystem_add_host", 00:23:16.521 "req_id": 1 00:23:16.521 } 00:23:16.521 Got JSON-RPC error response 00:23:16.521 response: 00:23:16.521 { 00:23:16.521 "code": -32603, 00:23:16.521 "message": "Internal error" 00:23:16.521 } 00:23:16.521 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:16.521 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:16.521 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:16.521 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:16.521 20:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3236334 00:23:16.521 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3236334 ']' 00:23:16.521 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3236334 00:23:16.521 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:16.521 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:16.521 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3236334 00:23:16.521 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:16.521 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:16.521 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3236334' 00:23:16.521 killing process with pid 3236334 00:23:16.521 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3236334 00:23:16.521 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3236334 00:23:16.779 20:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ksIJTUxwMO 00:23:16.779 20:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:16.779 20:11:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:16.779 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:16.779 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.779 20:11:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 
-- # nvmfpid=3236626 00:23:16.779 20:11:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:16.779 20:11:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3236626 00:23:16.779 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3236626 ']' 00:23:16.779 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.779 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:16.779 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.779 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:16.779 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.779 [2024-07-13 20:11:04.401408] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:16.779 [2024-07-13 20:11:04.401496] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.779 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.037 [2024-07-13 20:11:04.468581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.037 [2024-07-13 20:11:04.556669] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.037 [2024-07-13 20:11:04.556734] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.037 [2024-07-13 20:11:04.556751] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.037 [2024-07-13 20:11:04.556764] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.037 [2024-07-13 20:11:04.556776] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
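The "Incorrect permissions for PSK file" error earlier is why target/tls.sh runs chmod 0600 on the key before retrying: the target refuses to load a PSK that is group- or world-readable, and nvmf_subsystem_add_host fails with -32603 until the mode is tightened. A minimal sketch of the target-side sequence being replayed here, assuming rpc.py is on PATH and a target is already listening on /var/tmp/spdk.sock; the key below is an illustrative placeholder, not a valid interchange PSK:

PSK=/tmp/psk.txt
echo "NVMeTLSkey-1:01:placeholder:" > "$PSK"    # placeholder key material (assumption)
chmod 0600 "$PSK"                               # owner-only, or tcp_load_psk rejects the file
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k = TLS listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$PSK"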
00:23:17.037 [2024-07-13 20:11:04.556814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.037 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:17.037 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:17.037 20:11:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.037 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.037 20:11:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.296 20:11:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.296 20:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ksIJTUxwMO 00:23:17.296 20:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ksIJTUxwMO 00:23:17.296 20:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:17.296 [2024-07-13 20:11:04.929080] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.296 20:11:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:17.556 20:11:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:17.815 [2024-07-13 20:11:05.418414] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:17.815 [2024-07-13 20:11:05.418671] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.815 20:11:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:18.073 malloc0 00:23:18.073 20:11:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:18.333 20:11:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ksIJTUxwMO 00:23:18.932 [2024-07-13 20:11:06.260695] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:18.932 20:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3236905 00:23:18.932 20:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.932 20:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.932 20:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3236905 /var/tmp/bdevperf.sock 00:23:18.932 20:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3236905 ']' 00:23:18.932 20:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.932 20:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:18.932 20:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.932 20:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:18.932 20:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.932 [2024-07-13 20:11:06.325505] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:18.932 [2024-07-13 20:11:06.325592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3236905 ] 00:23:18.932 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.932 [2024-07-13 20:11:06.391116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.932 [2024-07-13 20:11:06.480079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.932 20:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:18.932 20:11:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:18.932 20:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ksIJTUxwMO 00:23:19.495 [2024-07-13 20:11:06.855682] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.495 [2024-07-13 20:11:06.855801] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:19.495 TLSTESTn1 00:23:19.495 20:11:06 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:19.753 20:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:19.753 "subsystems": [ 00:23:19.753 { 00:23:19.753 "subsystem": "keyring", 00:23:19.753 "config": [] 00:23:19.753 }, 00:23:19.753 { 00:23:19.753 "subsystem": "iobuf", 00:23:19.753 "config": [ 00:23:19.753 { 00:23:19.753 "method": "iobuf_set_options", 00:23:19.753 "params": { 00:23:19.753 "small_pool_count": 8192, 00:23:19.753 "large_pool_count": 1024, 00:23:19.753 "small_bufsize": 8192, 00:23:19.753 "large_bufsize": 135168 00:23:19.753 } 00:23:19.753 } 00:23:19.753 ] 00:23:19.753 }, 00:23:19.753 { 00:23:19.753 "subsystem": "sock", 00:23:19.753 "config": [ 00:23:19.753 { 00:23:19.753 "method": "sock_set_default_impl", 00:23:19.753 "params": { 00:23:19.753 "impl_name": "posix" 00:23:19.753 } 00:23:19.753 }, 00:23:19.753 { 00:23:19.753 "method": "sock_impl_set_options", 00:23:19.753 "params": { 00:23:19.753 "impl_name": "ssl", 00:23:19.753 "recv_buf_size": 4096, 00:23:19.753 "send_buf_size": 4096, 00:23:19.753 "enable_recv_pipe": true, 00:23:19.753 "enable_quickack": false, 00:23:19.753 "enable_placement_id": 0, 00:23:19.753 "enable_zerocopy_send_server": true, 00:23:19.753 "enable_zerocopy_send_client": false, 00:23:19.753 "zerocopy_threshold": 0, 00:23:19.753 "tls_version": 0, 00:23:19.753 "enable_ktls": false 00:23:19.753 } 00:23:19.753 }, 00:23:19.753 { 00:23:19.753 "method": "sock_impl_set_options", 00:23:19.753 "params": { 00:23:19.753 "impl_name": "posix", 00:23:19.753 "recv_buf_size": 2097152, 00:23:19.753 "send_buf_size": 
2097152, 00:23:19.753 "enable_recv_pipe": true, 00:23:19.753 "enable_quickack": false, 00:23:19.753 "enable_placement_id": 0, 00:23:19.753 "enable_zerocopy_send_server": true, 00:23:19.753 "enable_zerocopy_send_client": false, 00:23:19.753 "zerocopy_threshold": 0, 00:23:19.753 "tls_version": 0, 00:23:19.754 "enable_ktls": false 00:23:19.754 } 00:23:19.754 } 00:23:19.754 ] 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "subsystem": "vmd", 00:23:19.754 "config": [] 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "subsystem": "accel", 00:23:19.754 "config": [ 00:23:19.754 { 00:23:19.754 "method": "accel_set_options", 00:23:19.754 "params": { 00:23:19.754 "small_cache_size": 128, 00:23:19.754 "large_cache_size": 16, 00:23:19.754 "task_count": 2048, 00:23:19.754 "sequence_count": 2048, 00:23:19.754 "buf_count": 2048 00:23:19.754 } 00:23:19.754 } 00:23:19.754 ] 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "subsystem": "bdev", 00:23:19.754 "config": [ 00:23:19.754 { 00:23:19.754 "method": "bdev_set_options", 00:23:19.754 "params": { 00:23:19.754 "bdev_io_pool_size": 65535, 00:23:19.754 "bdev_io_cache_size": 256, 00:23:19.754 "bdev_auto_examine": true, 00:23:19.754 "iobuf_small_cache_size": 128, 00:23:19.754 "iobuf_large_cache_size": 16 00:23:19.754 } 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "method": "bdev_raid_set_options", 00:23:19.754 "params": { 00:23:19.754 "process_window_size_kb": 1024 00:23:19.754 } 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "method": "bdev_iscsi_set_options", 00:23:19.754 "params": { 00:23:19.754 "timeout_sec": 30 00:23:19.754 } 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "method": "bdev_nvme_set_options", 00:23:19.754 "params": { 00:23:19.754 "action_on_timeout": "none", 00:23:19.754 "timeout_us": 0, 00:23:19.754 "timeout_admin_us": 0, 00:23:19.754 "keep_alive_timeout_ms": 10000, 00:23:19.754 "arbitration_burst": 0, 00:23:19.754 "low_priority_weight": 0, 00:23:19.754 "medium_priority_weight": 0, 00:23:19.754 "high_priority_weight": 0, 00:23:19.754 "nvme_adminq_poll_period_us": 10000, 00:23:19.754 "nvme_ioq_poll_period_us": 0, 00:23:19.754 "io_queue_requests": 0, 00:23:19.754 "delay_cmd_submit": true, 00:23:19.754 "transport_retry_count": 4, 00:23:19.754 "bdev_retry_count": 3, 00:23:19.754 "transport_ack_timeout": 0, 00:23:19.754 "ctrlr_loss_timeout_sec": 0, 00:23:19.754 "reconnect_delay_sec": 0, 00:23:19.754 "fast_io_fail_timeout_sec": 0, 00:23:19.754 "disable_auto_failback": false, 00:23:19.754 "generate_uuids": false, 00:23:19.754 "transport_tos": 0, 00:23:19.754 "nvme_error_stat": false, 00:23:19.754 "rdma_srq_size": 0, 00:23:19.754 "io_path_stat": false, 00:23:19.754 "allow_accel_sequence": false, 00:23:19.754 "rdma_max_cq_size": 0, 00:23:19.754 "rdma_cm_event_timeout_ms": 0, 00:23:19.754 "dhchap_digests": [ 00:23:19.754 "sha256", 00:23:19.754 "sha384", 00:23:19.754 "sha512" 00:23:19.754 ], 00:23:19.754 "dhchap_dhgroups": [ 00:23:19.754 "null", 00:23:19.754 "ffdhe2048", 00:23:19.754 "ffdhe3072", 00:23:19.754 "ffdhe4096", 00:23:19.754 "ffdhe6144", 00:23:19.754 "ffdhe8192" 00:23:19.754 ] 00:23:19.754 } 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "method": "bdev_nvme_set_hotplug", 00:23:19.754 "params": { 00:23:19.754 "period_us": 100000, 00:23:19.754 "enable": false 00:23:19.754 } 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "method": "bdev_malloc_create", 00:23:19.754 "params": { 00:23:19.754 "name": "malloc0", 00:23:19.754 "num_blocks": 8192, 00:23:19.754 "block_size": 4096, 00:23:19.754 "physical_block_size": 4096, 00:23:19.754 "uuid": 
"29e19b01-676a-489f-8751-86fe138b3efe", 00:23:19.754 "optimal_io_boundary": 0 00:23:19.754 } 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "method": "bdev_wait_for_examine" 00:23:19.754 } 00:23:19.754 ] 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "subsystem": "nbd", 00:23:19.754 "config": [] 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "subsystem": "scheduler", 00:23:19.754 "config": [ 00:23:19.754 { 00:23:19.754 "method": "framework_set_scheduler", 00:23:19.754 "params": { 00:23:19.754 "name": "static" 00:23:19.754 } 00:23:19.754 } 00:23:19.754 ] 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "subsystem": "nvmf", 00:23:19.754 "config": [ 00:23:19.754 { 00:23:19.754 "method": "nvmf_set_config", 00:23:19.754 "params": { 00:23:19.754 "discovery_filter": "match_any", 00:23:19.754 "admin_cmd_passthru": { 00:23:19.754 "identify_ctrlr": false 00:23:19.754 } 00:23:19.754 } 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "method": "nvmf_set_max_subsystems", 00:23:19.754 "params": { 00:23:19.754 "max_subsystems": 1024 00:23:19.754 } 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "method": "nvmf_set_crdt", 00:23:19.754 "params": { 00:23:19.754 "crdt1": 0, 00:23:19.754 "crdt2": 0, 00:23:19.754 "crdt3": 0 00:23:19.754 } 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "method": "nvmf_create_transport", 00:23:19.754 "params": { 00:23:19.754 "trtype": "TCP", 00:23:19.754 "max_queue_depth": 128, 00:23:19.754 "max_io_qpairs_per_ctrlr": 127, 00:23:19.754 "in_capsule_data_size": 4096, 00:23:19.754 "max_io_size": 131072, 00:23:19.754 "io_unit_size": 131072, 00:23:19.754 "max_aq_depth": 128, 00:23:19.754 "num_shared_buffers": 511, 00:23:19.754 "buf_cache_size": 4294967295, 00:23:19.754 "dif_insert_or_strip": false, 00:23:19.754 "zcopy": false, 00:23:19.754 "c2h_success": false, 00:23:19.754 "sock_priority": 0, 00:23:19.754 "abort_timeout_sec": 1, 00:23:19.754 "ack_timeout": 0, 00:23:19.754 "data_wr_pool_size": 0 00:23:19.754 } 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "method": "nvmf_create_subsystem", 00:23:19.754 "params": { 00:23:19.754 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.754 "allow_any_host": false, 00:23:19.754 "serial_number": "SPDK00000000000001", 00:23:19.754 "model_number": "SPDK bdev Controller", 00:23:19.754 "max_namespaces": 10, 00:23:19.754 "min_cntlid": 1, 00:23:19.754 "max_cntlid": 65519, 00:23:19.754 "ana_reporting": false 00:23:19.754 } 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "method": "nvmf_subsystem_add_host", 00:23:19.754 "params": { 00:23:19.754 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.754 "host": "nqn.2016-06.io.spdk:host1", 00:23:19.754 "psk": "/tmp/tmp.ksIJTUxwMO" 00:23:19.754 } 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "method": "nvmf_subsystem_add_ns", 00:23:19.754 "params": { 00:23:19.754 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.754 "namespace": { 00:23:19.754 "nsid": 1, 00:23:19.754 "bdev_name": "malloc0", 00:23:19.754 "nguid": "29E19B01676A489F875186FE138B3EFE", 00:23:19.754 "uuid": "29e19b01-676a-489f-8751-86fe138b3efe", 00:23:19.754 "no_auto_visible": false 00:23:19.754 } 00:23:19.754 } 00:23:19.754 }, 00:23:19.754 { 00:23:19.754 "method": "nvmf_subsystem_add_listener", 00:23:19.754 "params": { 00:23:19.754 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.754 "listen_address": { 00:23:19.754 "trtype": "TCP", 00:23:19.754 "adrfam": "IPv4", 00:23:19.754 "traddr": "10.0.0.2", 00:23:19.754 "trsvcid": "4420" 00:23:19.754 }, 00:23:19.754 "secure_channel": true 00:23:19.754 } 00:23:19.754 } 00:23:19.754 ] 00:23:19.754 } 00:23:19.754 ] 00:23:19.754 }' 00:23:19.754 20:11:07 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:20.013 20:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:20.013 "subsystems": [ 00:23:20.013 { 00:23:20.013 "subsystem": "keyring", 00:23:20.013 "config": [] 00:23:20.013 }, 00:23:20.014 { 00:23:20.014 "subsystem": "iobuf", 00:23:20.014 "config": [ 00:23:20.014 { 00:23:20.014 "method": "iobuf_set_options", 00:23:20.014 "params": { 00:23:20.014 "small_pool_count": 8192, 00:23:20.014 "large_pool_count": 1024, 00:23:20.014 "small_bufsize": 8192, 00:23:20.014 "large_bufsize": 135168 00:23:20.014 } 00:23:20.014 } 00:23:20.014 ] 00:23:20.014 }, 00:23:20.014 { 00:23:20.014 "subsystem": "sock", 00:23:20.014 "config": [ 00:23:20.014 { 00:23:20.014 "method": "sock_set_default_impl", 00:23:20.014 "params": { 00:23:20.014 "impl_name": "posix" 00:23:20.014 } 00:23:20.014 }, 00:23:20.014 { 00:23:20.014 "method": "sock_impl_set_options", 00:23:20.014 "params": { 00:23:20.014 "impl_name": "ssl", 00:23:20.014 "recv_buf_size": 4096, 00:23:20.014 "send_buf_size": 4096, 00:23:20.014 "enable_recv_pipe": true, 00:23:20.014 "enable_quickack": false, 00:23:20.014 "enable_placement_id": 0, 00:23:20.014 "enable_zerocopy_send_server": true, 00:23:20.014 "enable_zerocopy_send_client": false, 00:23:20.014 "zerocopy_threshold": 0, 00:23:20.014 "tls_version": 0, 00:23:20.014 "enable_ktls": false 00:23:20.014 } 00:23:20.014 }, 00:23:20.014 { 00:23:20.014 "method": "sock_impl_set_options", 00:23:20.014 "params": { 00:23:20.014 "impl_name": "posix", 00:23:20.014 "recv_buf_size": 2097152, 00:23:20.014 "send_buf_size": 2097152, 00:23:20.014 "enable_recv_pipe": true, 00:23:20.014 "enable_quickack": false, 00:23:20.014 "enable_placement_id": 0, 00:23:20.014 "enable_zerocopy_send_server": true, 00:23:20.014 "enable_zerocopy_send_client": false, 00:23:20.014 "zerocopy_threshold": 0, 00:23:20.014 "tls_version": 0, 00:23:20.014 "enable_ktls": false 00:23:20.014 } 00:23:20.014 } 00:23:20.014 ] 00:23:20.014 }, 00:23:20.014 { 00:23:20.014 "subsystem": "vmd", 00:23:20.014 "config": [] 00:23:20.014 }, 00:23:20.014 { 00:23:20.014 "subsystem": "accel", 00:23:20.014 "config": [ 00:23:20.014 { 00:23:20.014 "method": "accel_set_options", 00:23:20.014 "params": { 00:23:20.014 "small_cache_size": 128, 00:23:20.014 "large_cache_size": 16, 00:23:20.014 "task_count": 2048, 00:23:20.014 "sequence_count": 2048, 00:23:20.014 "buf_count": 2048 00:23:20.014 } 00:23:20.014 } 00:23:20.014 ] 00:23:20.014 }, 00:23:20.014 { 00:23:20.014 "subsystem": "bdev", 00:23:20.014 "config": [ 00:23:20.014 { 00:23:20.014 "method": "bdev_set_options", 00:23:20.014 "params": { 00:23:20.014 "bdev_io_pool_size": 65535, 00:23:20.014 "bdev_io_cache_size": 256, 00:23:20.014 "bdev_auto_examine": true, 00:23:20.014 "iobuf_small_cache_size": 128, 00:23:20.014 "iobuf_large_cache_size": 16 00:23:20.014 } 00:23:20.014 }, 00:23:20.014 { 00:23:20.014 "method": "bdev_raid_set_options", 00:23:20.014 "params": { 00:23:20.014 "process_window_size_kb": 1024 00:23:20.014 } 00:23:20.014 }, 00:23:20.014 { 00:23:20.014 "method": "bdev_iscsi_set_options", 00:23:20.014 "params": { 00:23:20.014 "timeout_sec": 30 00:23:20.014 } 00:23:20.014 }, 00:23:20.014 { 00:23:20.014 "method": "bdev_nvme_set_options", 00:23:20.014 "params": { 00:23:20.014 "action_on_timeout": "none", 00:23:20.014 "timeout_us": 0, 00:23:20.014 "timeout_admin_us": 0, 00:23:20.014 "keep_alive_timeout_ms": 10000, 00:23:20.014 "arbitration_burst": 0, 
00:23:20.014 "low_priority_weight": 0, 00:23:20.014 "medium_priority_weight": 0, 00:23:20.014 "high_priority_weight": 0, 00:23:20.014 "nvme_adminq_poll_period_us": 10000, 00:23:20.014 "nvme_ioq_poll_period_us": 0, 00:23:20.014 "io_queue_requests": 512, 00:23:20.014 "delay_cmd_submit": true, 00:23:20.014 "transport_retry_count": 4, 00:23:20.014 "bdev_retry_count": 3, 00:23:20.014 "transport_ack_timeout": 0, 00:23:20.014 "ctrlr_loss_timeout_sec": 0, 00:23:20.014 "reconnect_delay_sec": 0, 00:23:20.014 "fast_io_fail_timeout_sec": 0, 00:23:20.014 "disable_auto_failback": false, 00:23:20.014 "generate_uuids": false, 00:23:20.014 "transport_tos": 0, 00:23:20.014 "nvme_error_stat": false, 00:23:20.014 "rdma_srq_size": 0, 00:23:20.014 "io_path_stat": false, 00:23:20.014 "allow_accel_sequence": false, 00:23:20.014 "rdma_max_cq_size": 0, 00:23:20.014 "rdma_cm_event_timeout_ms": 0, 00:23:20.014 "dhchap_digests": [ 00:23:20.014 "sha256", 00:23:20.014 "sha384", 00:23:20.014 "sha512" 00:23:20.014 ], 00:23:20.014 "dhchap_dhgroups": [ 00:23:20.014 "null", 00:23:20.014 "ffdhe2048", 00:23:20.014 "ffdhe3072", 00:23:20.014 "ffdhe4096", 00:23:20.014 "ffdhe6144", 00:23:20.014 "ffdhe8192" 00:23:20.014 ] 00:23:20.014 } 00:23:20.014 }, 00:23:20.014 { 00:23:20.014 "method": "bdev_nvme_attach_controller", 00:23:20.014 "params": { 00:23:20.014 "name": "TLSTEST", 00:23:20.014 "trtype": "TCP", 00:23:20.014 "adrfam": "IPv4", 00:23:20.014 "traddr": "10.0.0.2", 00:23:20.014 "trsvcid": "4420", 00:23:20.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.014 "prchk_reftag": false, 00:23:20.014 "prchk_guard": false, 00:23:20.014 "ctrlr_loss_timeout_sec": 0, 00:23:20.014 "reconnect_delay_sec": 0, 00:23:20.014 "fast_io_fail_timeout_sec": 0, 00:23:20.014 "psk": "/tmp/tmp.ksIJTUxwMO", 00:23:20.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.014 "hdgst": false, 00:23:20.014 "ddgst": false 00:23:20.014 } 00:23:20.014 }, 00:23:20.014 { 00:23:20.014 "method": "bdev_nvme_set_hotplug", 00:23:20.014 "params": { 00:23:20.014 "period_us": 100000, 00:23:20.014 "enable": false 00:23:20.014 } 00:23:20.014 }, 00:23:20.014 { 00:23:20.014 "method": "bdev_wait_for_examine" 00:23:20.014 } 00:23:20.014 ] 00:23:20.014 }, 00:23:20.014 { 00:23:20.014 "subsystem": "nbd", 00:23:20.014 "config": [] 00:23:20.014 } 00:23:20.014 ] 00:23:20.014 }' 00:23:20.014 20:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3236905 00:23:20.014 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3236905 ']' 00:23:20.014 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3236905 00:23:20.014 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:20.273 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:20.273 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3236905 00:23:20.273 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:20.273 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:20.273 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3236905' 00:23:20.273 killing process with pid 3236905 00:23:20.273 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3236905 00:23:20.273 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.273 00:23:20.273 Latency(us) 00:23:20.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:23:20.273 =================================================================================================================== 00:23:20.273 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.273 [2024-07-13 20:11:07.700780] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:20.273 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3236905 00:23:20.273 20:11:07 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3236626 00:23:20.273 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3236626 ']' 00:23:20.273 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3236626 00:23:20.273 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:20.531 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:20.531 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3236626 00:23:20.531 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:20.531 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:20.531 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3236626' 00:23:20.531 killing process with pid 3236626 00:23:20.531 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3236626 00:23:20.531 [2024-07-13 20:11:07.957607] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:20.531 20:11:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3236626 00:23:20.791 20:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:20.791 20:11:08 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:20.791 "subsystems": [ 00:23:20.791 { 00:23:20.791 "subsystem": "keyring", 00:23:20.791 "config": [] 00:23:20.791 }, 00:23:20.791 { 00:23:20.791 "subsystem": "iobuf", 00:23:20.791 "config": [ 00:23:20.791 { 00:23:20.791 "method": "iobuf_set_options", 00:23:20.791 "params": { 00:23:20.791 "small_pool_count": 8192, 00:23:20.791 "large_pool_count": 1024, 00:23:20.791 "small_bufsize": 8192, 00:23:20.791 "large_bufsize": 135168 00:23:20.791 } 00:23:20.791 } 00:23:20.791 ] 00:23:20.791 }, 00:23:20.791 { 00:23:20.791 "subsystem": "sock", 00:23:20.791 "config": [ 00:23:20.791 { 00:23:20.791 "method": "sock_set_default_impl", 00:23:20.791 "params": { 00:23:20.791 "impl_name": "posix" 00:23:20.791 } 00:23:20.791 }, 00:23:20.791 { 00:23:20.791 "method": "sock_impl_set_options", 00:23:20.791 "params": { 00:23:20.791 "impl_name": "ssl", 00:23:20.791 "recv_buf_size": 4096, 00:23:20.791 "send_buf_size": 4096, 00:23:20.791 "enable_recv_pipe": true, 00:23:20.791 "enable_quickack": false, 00:23:20.791 "enable_placement_id": 0, 00:23:20.791 "enable_zerocopy_send_server": true, 00:23:20.791 "enable_zerocopy_send_client": false, 00:23:20.791 "zerocopy_threshold": 0, 00:23:20.791 "tls_version": 0, 00:23:20.791 "enable_ktls": false 00:23:20.791 } 00:23:20.791 }, 00:23:20.791 { 00:23:20.791 "method": "sock_impl_set_options", 00:23:20.791 "params": { 00:23:20.791 "impl_name": "posix", 00:23:20.791 "recv_buf_size": 2097152, 00:23:20.791 "send_buf_size": 2097152, 00:23:20.791 "enable_recv_pipe": true, 00:23:20.791 "enable_quickack": false, 00:23:20.791 "enable_placement_id": 0, 00:23:20.791 
"enable_zerocopy_send_server": true, 00:23:20.791 "enable_zerocopy_send_client": false, 00:23:20.791 "zerocopy_threshold": 0, 00:23:20.791 "tls_version": 0, 00:23:20.791 "enable_ktls": false 00:23:20.791 } 00:23:20.791 } 00:23:20.791 ] 00:23:20.791 }, 00:23:20.791 { 00:23:20.791 "subsystem": "vmd", 00:23:20.791 "config": [] 00:23:20.791 }, 00:23:20.791 { 00:23:20.791 "subsystem": "accel", 00:23:20.791 "config": [ 00:23:20.791 { 00:23:20.791 "method": "accel_set_options", 00:23:20.791 "params": { 00:23:20.791 "small_cache_size": 128, 00:23:20.791 "large_cache_size": 16, 00:23:20.791 "task_count": 2048, 00:23:20.791 "sequence_count": 2048, 00:23:20.791 "buf_count": 2048 00:23:20.791 } 00:23:20.791 } 00:23:20.791 ] 00:23:20.791 }, 00:23:20.791 { 00:23:20.791 "subsystem": "bdev", 00:23:20.791 "config": [ 00:23:20.791 { 00:23:20.791 "method": "bdev_set_options", 00:23:20.791 "params": { 00:23:20.791 "bdev_io_pool_size": 65535, 00:23:20.791 "bdev_io_cache_size": 256, 00:23:20.791 "bdev_auto_examine": true, 00:23:20.791 "iobuf_small_cache_size": 128, 00:23:20.791 "iobuf_large_cache_size": 16 00:23:20.791 } 00:23:20.791 }, 00:23:20.791 { 00:23:20.791 "method": "bdev_raid_set_options", 00:23:20.791 "params": { 00:23:20.791 "process_window_size_kb": 1024 00:23:20.791 } 00:23:20.791 }, 00:23:20.791 { 00:23:20.791 "method": "bdev_iscsi_set_options", 00:23:20.791 "params": { 00:23:20.791 "timeout_sec": 30 00:23:20.791 } 00:23:20.791 }, 00:23:20.791 { 00:23:20.791 "method": "bdev_nvme_set_options", 00:23:20.791 "params": { 00:23:20.791 "action_on_timeout": "none", 00:23:20.791 "timeout_us": 0, 00:23:20.791 "timeout_admin_us": 0, 00:23:20.791 "keep_alive_timeout_ms": 10000, 00:23:20.791 "arbitration_burst": 0, 00:23:20.791 "low_priority_weight": 0, 00:23:20.791 "medium_priority_weight": 0, 00:23:20.791 "high_priority_weight": 0, 00:23:20.791 "nvme_adminq_poll_period_us": 10000, 00:23:20.791 "nvme_ioq_poll_period_us": 0, 00:23:20.791 "io_queue_requests": 0, 00:23:20.791 "delay_cmd_submit": true, 00:23:20.791 "transport_retry_count": 4, 00:23:20.791 "bdev_retry_count": 3, 00:23:20.791 "transport_ack_timeout": 0, 00:23:20.791 "ctrlr_loss_timeout_sec": 0, 00:23:20.791 "reconnect_delay_sec": 0, 00:23:20.791 "fast_io_fail_timeout_sec": 0, 00:23:20.791 "disable_auto_failback": false, 00:23:20.791 "generate_uuids": false, 00:23:20.791 "transport_tos": 0, 00:23:20.791 "nvme_error_stat": false, 00:23:20.791 "rdma_srq_size": 0, 00:23:20.791 "io_path_stat": false, 00:23:20.791 "allow_accel_sequence": false, 00:23:20.791 "rdma_max_cq_size": 0, 00:23:20.791 "rdma_cm_event_timeout_ms": 0, 00:23:20.791 "dhchap_digests": [ 00:23:20.791 "sha256", 00:23:20.791 "sha384", 00:23:20.791 "sha512" 00:23:20.791 ], 00:23:20.791 "dhchap_dhgroups": [ 00:23:20.791 "null", 00:23:20.791 "ffdhe2048", 00:23:20.791 "ffdhe3072", 00:23:20.791 "ffdhe4096", 00:23:20.791 "ffdhe 20:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:20.791 6144", 00:23:20.791 "ffdhe8192" 00:23:20.791 ] 00:23:20.791 } 00:23:20.791 }, 00:23:20.791 { 00:23:20.791 "method": "bdev_nvme_set_hotplug", 00:23:20.791 "params": { 00:23:20.791 "period_us": 100000, 00:23:20.791 "enable": false 00:23:20.791 } 00:23:20.791 }, 00:23:20.791 { 00:23:20.791 "method": "bdev_malloc_create", 00:23:20.791 "params": { 00:23:20.791 "name": "malloc0", 00:23:20.791 "num_blocks": 8192, 00:23:20.791 "block_size": 4096, 00:23:20.791 "physical_block_size": 4096, 00:23:20.791 "uuid": "29e19b01-676a-489f-8751-86fe138b3efe", 00:23:20.791 "optimal_io_boundary": 0 
00:23:20.791 } 00:23:20.791 }, 00:23:20.791 { 00:23:20.791 "method": "bdev_wait_for_examine" 00:23:20.791 } 00:23:20.791 ] 00:23:20.791 }, 00:23:20.791 { 00:23:20.791 "subsystem": "nbd", 00:23:20.791 "config": [] 00:23:20.791 }, 00:23:20.791 { 00:23:20.792 "subsystem": "scheduler", 00:23:20.792 "config": [ 00:23:20.792 { 00:23:20.792 "method": "framework_set_scheduler", 00:23:20.792 "params": { 00:23:20.792 "name": "static" 00:23:20.792 } 00:23:20.792 } 00:23:20.792 ] 00:23:20.792 }, 00:23:20.792 { 00:23:20.792 "subsystem": "nvmf", 00:23:20.792 "config": [ 00:23:20.792 { 00:23:20.792 "method": "nvmf_set_config", 00:23:20.792 "params": { 00:23:20.792 "discovery_filter": "match_any", 00:23:20.792 "admin_cmd_passthru": { 00:23:20.792 "identify_ctrlr": false 00:23:20.792 } 00:23:20.792 } 00:23:20.792 }, 00:23:20.792 { 00:23:20.792 "method": "nvmf_set_max_subsystems", 00:23:20.792 "params": { 00:23:20.792 "max_subsystems": 1024 00:23:20.792 } 00:23:20.792 }, 00:23:20.792 { 00:23:20.792 "method": "nvmf_set_crdt", 00:23:20.792 "params": { 00:23:20.792 "crdt1": 0, 00:23:20.792 "crdt2": 0, 00:23:20.792 "crdt3": 0 00:23:20.792 } 00:23:20.792 }, 00:23:20.792 { 00:23:20.792 "method": "nvmf_create_transport", 00:23:20.792 "params": { 00:23:20.792 "trtype": "TCP", 00:23:20.792 "max_queue_depth": 128, 00:23:20.792 "max_io_qpairs_per_ctrlr": 127, 00:23:20.792 "in_capsule_data_size": 4096, 00:23:20.792 "max_io_size": 131072, 00:23:20.792 "io_unit_size": 131072, 00:23:20.792 "max_aq_depth": 128, 00:23:20.792 "num_shared_buffers": 511, 00:23:20.792 "buf_cache_size": 4294967295, 00:23:20.792 "dif_insert_or_strip": false, 00:23:20.792 "zcopy": false, 00:23:20.792 "c2h_success": false, 00:23:20.792 "sock_priority": 0, 00:23:20.792 "abort_timeout_sec": 1, 00:23:20.792 "ack_timeout": 0, 00:23:20.792 "data_wr_pool_size": 0 00:23:20.792 } 00:23:20.792 }, 00:23:20.792 { 00:23:20.792 "method": "nvmf_create_subsystem", 00:23:20.792 "params": { 00:23:20.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.792 "allow_any_host": false, 00:23:20.792 "serial_number": "SPDK00000000000001", 00:23:20.792 "model_number": "SPDK bdev Controller", 00:23:20.792 "max_namespaces": 10, 00:23:20.792 "min_cntlid": 1, 00:23:20.792 "max_cntlid": 65519, 00:23:20.792 "ana_reporting": false 00:23:20.792 } 00:23:20.792 }, 00:23:20.792 { 00:23:20.792 "method": "nvmf_subsystem_add_host", 00:23:20.792 "params": { 00:23:20.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.792 "host": "nqn.2016-06.io.spdk:host1", 00:23:20.792 "psk": "/tmp/tmp.ksIJTUxwMO" 00:23:20.792 } 00:23:20.792 }, 00:23:20.792 { 00:23:20.792 "method": "nvmf_subsystem_add_ns", 00:23:20.792 "params": { 00:23:20.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.792 "namespace": { 00:23:20.792 "nsid": 1, 00:23:20.792 "bdev_name": "malloc0", 00:23:20.792 "nguid": "29E19B01676A489F875186FE138B3EFE", 00:23:20.792 "uuid": "29e19b01-676a-489f-8751-86fe138b3efe", 00:23:20.792 "no_auto_visible": false 00:23:20.792 } 00:23:20.792 } 00:23:20.792 }, 00:23:20.792 { 00:23:20.792 "method": "nvmf_subsystem_add_listener", 00:23:20.792 "params": { 00:23:20.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.792 "listen_address": { 00:23:20.792 "trtype": "TCP", 00:23:20.792 "adrfam": "IPv4", 00:23:20.792 "traddr": "10.0.0.2", 00:23:20.792 "trsvcid": "4420" 00:23:20.792 }, 00:23:20.792 "secure_channel": true 00:23:20.792 } 00:23:20.792 } 00:23:20.792 ] 00:23:20.792 } 00:23:20.792 ] 00:23:20.792 }' 00:23:20.792 20:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:20.792 
20:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.792 20:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3237168 00:23:20.792 20:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:20.792 20:11:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3237168 00:23:20.792 20:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3237168 ']' 00:23:20.792 20:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.792 20:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:20.792 20:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.792 20:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:20.792 20:11:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.792 [2024-07-13 20:11:08.253070] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:20.792 [2024-07-13 20:11:08.253165] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.792 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.792 [2024-07-13 20:11:08.322969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.792 [2024-07-13 20:11:08.410557] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.792 [2024-07-13 20:11:08.410621] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.792 [2024-07-13 20:11:08.410650] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.792 [2024-07-13 20:11:08.410662] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.792 [2024-07-13 20:11:08.410672] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
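The -c /dev/fd/62 in the nvmf_tgt command line above is bash process substitution: target/tls.sh@203 captures the running target's configuration with save_config (the JSON echoed above as tgtconf) and feeds it back verbatim when starting the next target instance. A sketch of that pattern, assuming the usual spdk tree layout:

tgtconf=$(rpc.py save_config)                         # JSON dump like the one printed above
build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &     # <(...) surfaces as /dev/fd/62 in the argv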
00:23:20.792 [2024-07-13 20:11:08.410753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.052 [2024-07-13 20:11:08.647729] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.052 [2024-07-13 20:11:08.663688] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:21.052 [2024-07-13 20:11:08.679734] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:21.052 [2024-07-13 20:11:08.687018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3237244 00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3237244 /var/tmp/bdevperf.sock 00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3237244 ']' 00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:21.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
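bdevperf is launched here with -z (start suspended, wait for RPC) and its own RPC socket via -r, so no I/O runs until a controller is attached and perform_tests is issued over /var/tmp/bdevperf.sock. A sketch of the launch, with the workload flags matching the command line logged below and the config again fed through process substitution (/dev/fd/63):

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &    # queue depth 128, 4 KiB verify I/O for 10 s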
00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.618 20:11:09 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:21.618 "subsystems": [ 00:23:21.618 { 00:23:21.618 "subsystem": "keyring", 00:23:21.618 "config": [] 00:23:21.618 }, 00:23:21.618 { 00:23:21.618 "subsystem": "iobuf", 00:23:21.618 "config": [ 00:23:21.618 { 00:23:21.618 "method": "iobuf_set_options", 00:23:21.618 "params": { 00:23:21.618 "small_pool_count": 8192, 00:23:21.618 "large_pool_count": 1024, 00:23:21.618 "small_bufsize": 8192, 00:23:21.618 "large_bufsize": 135168 00:23:21.618 } 00:23:21.618 } 00:23:21.618 ] 00:23:21.618 }, 00:23:21.618 { 00:23:21.618 "subsystem": "sock", 00:23:21.618 "config": [ 00:23:21.618 { 00:23:21.618 "method": "sock_set_default_impl", 00:23:21.618 "params": { 00:23:21.618 "impl_name": "posix" 00:23:21.618 } 00:23:21.618 }, 00:23:21.618 { 00:23:21.618 "method": "sock_impl_set_options", 00:23:21.618 "params": { 00:23:21.618 "impl_name": "ssl", 00:23:21.618 "recv_buf_size": 4096, 00:23:21.618 "send_buf_size": 4096, 00:23:21.618 "enable_recv_pipe": true, 00:23:21.618 "enable_quickack": false, 00:23:21.618 "enable_placement_id": 0, 00:23:21.618 "enable_zerocopy_send_server": true, 00:23:21.618 "enable_zerocopy_send_client": false, 00:23:21.618 "zerocopy_threshold": 0, 00:23:21.618 "tls_version": 0, 00:23:21.618 "enable_ktls": false 00:23:21.618 } 00:23:21.618 }, 00:23:21.618 { 00:23:21.618 "method": "sock_impl_set_options", 00:23:21.618 "params": { 00:23:21.618 "impl_name": "posix", 00:23:21.618 "recv_buf_size": 2097152, 00:23:21.618 "send_buf_size": 2097152, 00:23:21.618 "enable_recv_pipe": true, 00:23:21.618 "enable_quickack": false, 00:23:21.618 "enable_placement_id": 0, 00:23:21.618 "enable_zerocopy_send_server": true, 00:23:21.618 "enable_zerocopy_send_client": false, 00:23:21.618 "zerocopy_threshold": 0, 00:23:21.618 "tls_version": 0, 00:23:21.618 "enable_ktls": false 00:23:21.618 } 00:23:21.618 } 00:23:21.618 ] 00:23:21.618 }, 00:23:21.618 { 00:23:21.618 "subsystem": "vmd", 00:23:21.618 "config": [] 00:23:21.618 }, 00:23:21.618 { 00:23:21.618 "subsystem": "accel", 00:23:21.618 "config": [ 00:23:21.618 { 00:23:21.618 "method": "accel_set_options", 00:23:21.618 "params": { 00:23:21.618 "small_cache_size": 128, 00:23:21.618 "large_cache_size": 16, 00:23:21.618 "task_count": 2048, 00:23:21.618 "sequence_count": 2048, 00:23:21.618 "buf_count": 2048 00:23:21.618 } 00:23:21.618 } 00:23:21.618 ] 00:23:21.618 }, 00:23:21.618 { 00:23:21.618 "subsystem": "bdev", 00:23:21.618 "config": [ 00:23:21.618 { 00:23:21.618 "method": "bdev_set_options", 00:23:21.618 "params": { 00:23:21.618 "bdev_io_pool_size": 65535, 00:23:21.618 "bdev_io_cache_size": 256, 00:23:21.618 "bdev_auto_examine": true, 00:23:21.618 "iobuf_small_cache_size": 128, 00:23:21.618 "iobuf_large_cache_size": 16 00:23:21.618 } 00:23:21.618 }, 00:23:21.618 { 00:23:21.618 "method": "bdev_raid_set_options", 00:23:21.618 "params": { 00:23:21.618 "process_window_size_kb": 1024 00:23:21.618 } 00:23:21.618 }, 00:23:21.618 { 00:23:21.618 "method": "bdev_iscsi_set_options", 00:23:21.618 "params": { 00:23:21.618 "timeout_sec": 30 00:23:21.618 } 00:23:21.618 }, 00:23:21.618 { 00:23:21.618 "method": 
"bdev_nvme_set_options", 00:23:21.618 "params": { 00:23:21.619 "action_on_timeout": "none", 00:23:21.619 "timeout_us": 0, 00:23:21.619 "timeout_admin_us": 0, 00:23:21.619 "keep_alive_timeout_ms": 10000, 00:23:21.619 "arbitration_burst": 0, 00:23:21.619 "low_priority_weight": 0, 00:23:21.619 "medium_priority_weight": 0, 00:23:21.619 "high_priority_weight": 0, 00:23:21.619 "nvme_adminq_poll_period_us": 10000, 00:23:21.619 "nvme_ioq_poll_period_us": 0, 00:23:21.619 "io_queue_requests": 512, 00:23:21.619 "delay_cmd_submit": true, 00:23:21.619 "transport_retry_count": 4, 00:23:21.619 "bdev_retry_count": 3, 00:23:21.619 "transport_ack_timeout": 0, 00:23:21.619 "ctrlr_loss_timeout_sec": 0, 00:23:21.619 "reconnect_delay_sec": 0, 00:23:21.619 "fast_io_fail_timeout_sec": 0, 00:23:21.619 "disable_auto_failback": false, 00:23:21.619 "generate_uuids": false, 00:23:21.619 "transport_tos": 0, 00:23:21.619 "nvme_error_stat": false, 00:23:21.619 "rdma_srq_size": 0, 00:23:21.619 "io_path_stat": false, 00:23:21.619 "allow_accel_sequence": false, 00:23:21.619 "rdma_max_cq_size": 0, 00:23:21.619 "rdma_cm_event_timeout_ms": 0, 00:23:21.619 "dhchap_digests": [ 00:23:21.619 "sha256", 00:23:21.619 "sha384", 00:23:21.619 "sha512" 00:23:21.619 ], 00:23:21.619 "dhchap_dhgroups": [ 00:23:21.619 "null", 00:23:21.619 "ffdhe2048", 00:23:21.619 "ffdhe3072", 00:23:21.619 "ffdhe4096", 00:23:21.619 "ffdhe6144", 00:23:21.619 "ffdhe8192" 00:23:21.619 ] 00:23:21.619 } 00:23:21.619 }, 00:23:21.619 { 00:23:21.619 "method": "bdev_nvme_attach_controller", 00:23:21.619 "params": { 00:23:21.619 "name": "TLSTEST", 00:23:21.619 "trtype": "TCP", 00:23:21.619 "adrfam": "IPv4", 00:23:21.619 "traddr": "10.0.0.2", 00:23:21.619 "trsvcid": "4420", 00:23:21.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.619 "prchk_reftag": false, 00:23:21.619 "prchk_guard": false, 00:23:21.619 "ctrlr_loss_timeout_sec": 0, 00:23:21.619 "reconnect_delay_sec": 0, 00:23:21.619 "fast_io_fail_timeout_sec": 0, 00:23:21.619 "psk": "/tmp/tmp.ksIJTUxwMO", 00:23:21.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.619 "hdgst": false, 00:23:21.619 "ddgst": false 00:23:21.619 } 00:23:21.619 }, 00:23:21.619 { 00:23:21.619 "method": "bdev_nvme_set_hotplug", 00:23:21.619 "params": { 00:23:21.619 "period_us": 100000, 00:23:21.619 "enable": false 00:23:21.619 } 00:23:21.619 }, 00:23:21.619 { 00:23:21.619 "method": "bdev_wait_for_examine" 00:23:21.619 } 00:23:21.619 ] 00:23:21.619 }, 00:23:21.619 { 00:23:21.619 "subsystem": "nbd", 00:23:21.619 "config": [] 00:23:21.619 } 00:23:21.619 ] 00:23:21.619 }' 00:23:21.878 [2024-07-13 20:11:09.314441] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:21.878 [2024-07-13 20:11:09.314521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3237244 ] 00:23:21.878 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.878 [2024-07-13 20:11:09.378405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.878 [2024-07-13 20:11:09.470249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.139 [2024-07-13 20:11:09.640650] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:22.139 [2024-07-13 20:11:09.640795] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:22.707 20:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:22.707 20:11:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:22.707 20:11:10 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:22.966 Running I/O for 10 seconds... 00:23:32.945 00:23:32.945 Latency(us) 00:23:32.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.945 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:32.945 Verification LBA range: start 0x0 length 0x2000 00:23:32.945 TLSTESTn1 : 10.06 1195.42 4.67 0.00 0.00 106887.88 13301.38 100973.99 00:23:32.945 =================================================================================================================== 00:23:32.945 Total : 1195.42 4.67 0.00 0.00 106887.88 13301.38 100973.99 00:23:32.945 0 00:23:32.945 20:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:32.945 20:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3237244 00:23:32.945 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3237244 ']' 00:23:32.945 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3237244 00:23:32.945 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:32.945 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:32.945 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3237244 00:23:32.945 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:23:32.945 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:23:32.945 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3237244' 00:23:32.945 killing process with pid 3237244 00:23:32.945 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3237244 00:23:32.945 Received shutdown signal, test time was about 10.000000 seconds 00:23:32.945 00:23:32.945 Latency(us) 00:23:32.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.945 =================================================================================================================== 00:23:32.945 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.945 [2024-07-13 20:11:20.508217] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:23:32.945 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3237244 00:23:33.205 20:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3237168 00:23:33.205 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3237168 ']' 00:23:33.205 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3237168 00:23:33.205 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:33.205 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:33.205 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3237168 00:23:33.205 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:33.205 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:33.205 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3237168' 00:23:33.205 killing process with pid 3237168 00:23:33.205 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3237168 00:23:33.205 [2024-07-13 20:11:20.734983] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:33.205 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3237168 00:23:33.464 20:11:20 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:33.464 20:11:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:33.464 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:33.464 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.464 20:11:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3238659 00:23:33.464 20:11:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:33.464 20:11:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3238659 00:23:33.464 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3238659 ']' 00:23:33.464 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.464 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:33.464 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.464 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:33.465 20:11:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.465 [2024-07-13 20:11:21.038210] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:33.465 [2024-07-13 20:11:21.038300] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.465 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.465 [2024-07-13 20:11:21.106693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.723 [2024-07-13 20:11:21.193042] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:33.723 [2024-07-13 20:11:21.193105] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.723 [2024-07-13 20:11:21.193122] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.723 [2024-07-13 20:11:21.193135] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.723 [2024-07-13 20:11:21.193147] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.723 [2024-07-13 20:11:21.193179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.723 20:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:33.723 20:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:33.723 20:11:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:33.723 20:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.723 20:11:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.723 20:11:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.723 20:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ksIJTUxwMO 00:23:33.723 20:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.ksIJTUxwMO 00:23:33.723 20:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:33.982 [2024-07-13 20:11:21.581408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.982 20:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:34.240 20:11:21 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:34.498 [2024-07-13 20:11:22.086772] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:34.498 [2024-07-13 20:11:22.087017] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.498 20:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:34.756 malloc0 00:23:34.756 20:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:35.326 20:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ksIJTUxwMO 00:23:35.326 [2024-07-13 20:11:22.903606] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:35.326 20:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3238934 00:23:35.326 20:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:35.326 20:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:23:35.326 20:11:22 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3238934 /var/tmp/bdevperf.sock 00:23:35.326 20:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3238934 ']' 00:23:35.326 20:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.326 20:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:35.326 20:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.326 20:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:35.326 20:11:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.326 [2024-07-13 20:11:22.967652] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:35.326 [2024-07-13 20:11:22.967726] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3238934 ] 00:23:35.585 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.585 [2024-07-13 20:11:23.029450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.585 [2024-07-13 20:11:23.120130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.585 20:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:35.585 20:11:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:35.585 20:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ksIJTUxwMO 00:23:36.150 20:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:36.150 [2024-07-13 20:11:23.757505] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.409 nvme0n1 00:23:36.409 20:11:23 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:36.409 Running I/O for 1 seconds... 
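This final pass switches from the deprecated --psk <file> controller option to the keyring API: the key file is registered as a named key on the bdevperf instance, and bdev_nvme_attach_controller references it by name, which is why this attach logs only the "TLS support is considered experimental" notice and not the spdk_nvme_ctrlr_opts.psk deprecation seen earlier. The two RPCs, as issued above (full rpc.py path shortened):

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ksIJTUxwMO
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1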
00:23:37.347
00:23:37.347                                                  Latency(us)
00:23:37.347 Device Information : runtime(s)    IOPS    MiB/s   Fail/s   TO/s    Average      min        max
00:23:37.347 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:37.347 Verification LBA range: start 0x0 length 0x2000
00:23:37.347 nvme0n1            :    1.06    2002.68    7.82    0.00    0.00    62408.19    6310.87    90876.59
00:23:37.347 ===================================================================================================================
00:23:37.347 Total              :            2002.68    7.82    0.00    0.00    62408.19    6310.87    90876.59
00:23:37.347 0
00:23:37.608 20:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3238934
00:23:37.608 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3238934 ']'
00:23:37.608 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3238934
00:23:37.608 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:23:37.608 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:37.608 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3238934
00:23:37.608 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:23:37.608 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:23:37.608 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3238934'
killing process with pid 3238934
00:23:37.608 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3238934
Received shutdown signal, test time was about 1.000000 seconds
00:23:37.608
00:23:37.608                                                  Latency(us)
00:23:37.608 Device Information : runtime(s)    IOPS    MiB/s   Fail/s   TO/s    Average      min        max
00:23:37.608 ===================================================================================================================
00:23:37.608 Total              :               0.00    0.00    0.00    0.00       0.00       0.00       0.00
00:23:37.608 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3238934
00:23:37.868 20:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3238659
00:23:37.868 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3238659 ']'
00:23:37.868 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3238659
00:23:37.868 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:23:37.868 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:23:37.868 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3238659
00:23:37.868 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:23:37.868 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:23:37.868 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3238659'
killing process with pid 3238659
00:23:37.868 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3238659
[2024-07-13 20:11:25.300277] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3238659
00:23:38.129 20:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart
00:23:38.129 20:11:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:38.129
20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:38.129 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.129 20:11:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3239228 00:23:38.129 20:11:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:38.129 20:11:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3239228 00:23:38.129 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3239228 ']' 00:23:38.129 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.129 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:38.129 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.129 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:38.129 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.129 [2024-07-13 20:11:25.595622] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:38.129 [2024-07-13 20:11:25.595701] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.129 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.129 [2024-07-13 20:11:25.657446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.129 [2024-07-13 20:11:25.740066] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.129 [2024-07-13 20:11:25.740120] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.129 [2024-07-13 20:11:25.740135] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.129 [2024-07-13 20:11:25.740162] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.129 [2024-07-13 20:11:25.740180] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
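waitforlisten, whose xtrace dominates the surrounding lines, is essentially a bounded poll against the app's RPC socket. A minimal equivalent of the pattern (the real helper in common/autotest_common.sh carries extra checks, so treat this as a sketch):

    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        # rpc_get_methods only answers once the app is up and serving RPCs
        rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && break
        sleep 0.5
    done

The (( i == 0 )) / return 0 pair that follows in the trace records the loop succeeding and the helper returning.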
00:23:38.129 [2024-07-13 20:11:25.740221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.388 [2024-07-13 20:11:25.887070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.388 malloc0 00:23:38.388 [2024-07-13 20:11:25.919203] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:38.388 [2024-07-13 20:11:25.919444] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3239248 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3239248 /var/tmp/bdevperf.sock 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3239248 ']' 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:38.388 20:11:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.388 [2024-07-13 20:11:25.990269] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:38.388 [2024-07-13 20:11:25.990339] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3239248 ] 00:23:38.388 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.676 [2024-07-13 20:11:26.048454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.676 [2024-07-13 20:11:26.135438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.676 20:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:38.676 20:11:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:38.676 20:11:26 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ksIJTUxwMO 00:23:38.935 20:11:26 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:39.192 [2024-07-13 20:11:26.721098] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.192 nvme0n1 00:23:39.192 20:11:26 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:39.451 Running I/O for 1 seconds... 00:23:40.390 00:23:40.390 Latency(us) 00:23:40.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.390 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:40.390 Verification LBA range: start 0x0 length 0x2000 00:23:40.390 nvme0n1 : 1.05 2015.19 7.87 0.00 0.00 62158.19 6844.87 91653.31 00:23:40.390 =================================================================================================================== 00:23:40.390 Total : 2015.19 7.87 0.00 0.00 62158.19 6844.87 91653.31 00:23:40.390 0 00:23:40.390 20:11:27 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:40.390 20:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.390 20:11:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.648 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.648 20:11:28 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:40.648 "subsystems": [ 00:23:40.648 { 00:23:40.648 "subsystem": "keyring", 00:23:40.648 "config": [ 00:23:40.648 { 00:23:40.648 "method": "keyring_file_add_key", 00:23:40.648 "params": { 00:23:40.648 "name": "key0", 00:23:40.648 "path": "/tmp/tmp.ksIJTUxwMO" 00:23:40.648 } 00:23:40.648 } 00:23:40.648 ] 00:23:40.648 }, 00:23:40.648 { 00:23:40.648 "subsystem": "iobuf", 00:23:40.648 "config": [ 00:23:40.648 { 00:23:40.648 "method": "iobuf_set_options", 00:23:40.648 "params": { 00:23:40.648 "small_pool_count": 8192, 00:23:40.648 "large_pool_count": 1024, 00:23:40.648 "small_bufsize": 8192, 00:23:40.648 "large_bufsize": 135168 00:23:40.648 } 00:23:40.648 } 00:23:40.648 ] 00:23:40.648 }, 00:23:40.648 { 00:23:40.648 "subsystem": "sock", 00:23:40.648 "config": [ 00:23:40.648 { 00:23:40.648 "method": "sock_set_default_impl", 00:23:40.648 "params": { 00:23:40.648 "impl_name": "posix" 00:23:40.648 } 00:23:40.648 }, 00:23:40.648 { 
00:23:40.648 "method": "sock_impl_set_options", 00:23:40.648 "params": { 00:23:40.648 "impl_name": "ssl", 00:23:40.648 "recv_buf_size": 4096, 00:23:40.648 "send_buf_size": 4096, 00:23:40.648 "enable_recv_pipe": true, 00:23:40.648 "enable_quickack": false, 00:23:40.648 "enable_placement_id": 0, 00:23:40.648 "enable_zerocopy_send_server": true, 00:23:40.648 "enable_zerocopy_send_client": false, 00:23:40.648 "zerocopy_threshold": 0, 00:23:40.648 "tls_version": 0, 00:23:40.648 "enable_ktls": false 00:23:40.648 } 00:23:40.648 }, 00:23:40.648 { 00:23:40.648 "method": "sock_impl_set_options", 00:23:40.648 "params": { 00:23:40.648 "impl_name": "posix", 00:23:40.648 "recv_buf_size": 2097152, 00:23:40.648 "send_buf_size": 2097152, 00:23:40.648 "enable_recv_pipe": true, 00:23:40.648 "enable_quickack": false, 00:23:40.648 "enable_placement_id": 0, 00:23:40.648 "enable_zerocopy_send_server": true, 00:23:40.648 "enable_zerocopy_send_client": false, 00:23:40.648 "zerocopy_threshold": 0, 00:23:40.648 "tls_version": 0, 00:23:40.648 "enable_ktls": false 00:23:40.648 } 00:23:40.648 } 00:23:40.648 ] 00:23:40.648 }, 00:23:40.648 { 00:23:40.648 "subsystem": "vmd", 00:23:40.648 "config": [] 00:23:40.648 }, 00:23:40.648 { 00:23:40.648 "subsystem": "accel", 00:23:40.648 "config": [ 00:23:40.648 { 00:23:40.648 "method": "accel_set_options", 00:23:40.648 "params": { 00:23:40.648 "small_cache_size": 128, 00:23:40.648 "large_cache_size": 16, 00:23:40.648 "task_count": 2048, 00:23:40.648 "sequence_count": 2048, 00:23:40.648 "buf_count": 2048 00:23:40.648 } 00:23:40.648 } 00:23:40.648 ] 00:23:40.648 }, 00:23:40.648 { 00:23:40.648 "subsystem": "bdev", 00:23:40.648 "config": [ 00:23:40.648 { 00:23:40.648 "method": "bdev_set_options", 00:23:40.648 "params": { 00:23:40.648 "bdev_io_pool_size": 65535, 00:23:40.648 "bdev_io_cache_size": 256, 00:23:40.648 "bdev_auto_examine": true, 00:23:40.648 "iobuf_small_cache_size": 128, 00:23:40.648 "iobuf_large_cache_size": 16 00:23:40.648 } 00:23:40.648 }, 00:23:40.648 { 00:23:40.648 "method": "bdev_raid_set_options", 00:23:40.648 "params": { 00:23:40.648 "process_window_size_kb": 1024 00:23:40.648 } 00:23:40.648 }, 00:23:40.648 { 00:23:40.648 "method": "bdev_iscsi_set_options", 00:23:40.648 "params": { 00:23:40.648 "timeout_sec": 30 00:23:40.648 } 00:23:40.648 }, 00:23:40.648 { 00:23:40.648 "method": "bdev_nvme_set_options", 00:23:40.648 "params": { 00:23:40.648 "action_on_timeout": "none", 00:23:40.648 "timeout_us": 0, 00:23:40.648 "timeout_admin_us": 0, 00:23:40.648 "keep_alive_timeout_ms": 10000, 00:23:40.648 "arbitration_burst": 0, 00:23:40.648 "low_priority_weight": 0, 00:23:40.648 "medium_priority_weight": 0, 00:23:40.648 "high_priority_weight": 0, 00:23:40.648 "nvme_adminq_poll_period_us": 10000, 00:23:40.648 "nvme_ioq_poll_period_us": 0, 00:23:40.648 "io_queue_requests": 0, 00:23:40.648 "delay_cmd_submit": true, 00:23:40.648 "transport_retry_count": 4, 00:23:40.648 "bdev_retry_count": 3, 00:23:40.648 "transport_ack_timeout": 0, 00:23:40.648 "ctrlr_loss_timeout_sec": 0, 00:23:40.648 "reconnect_delay_sec": 0, 00:23:40.648 "fast_io_fail_timeout_sec": 0, 00:23:40.648 "disable_auto_failback": false, 00:23:40.648 "generate_uuids": false, 00:23:40.648 "transport_tos": 0, 00:23:40.648 "nvme_error_stat": false, 00:23:40.648 "rdma_srq_size": 0, 00:23:40.648 "io_path_stat": false, 00:23:40.648 "allow_accel_sequence": false, 00:23:40.648 "rdma_max_cq_size": 0, 00:23:40.648 "rdma_cm_event_timeout_ms": 0, 00:23:40.648 "dhchap_digests": [ 00:23:40.648 "sha256", 00:23:40.648 "sha384", 
00:23:40.648 "sha512" 00:23:40.648 ], 00:23:40.648 "dhchap_dhgroups": [ 00:23:40.648 "null", 00:23:40.648 "ffdhe2048", 00:23:40.648 "ffdhe3072", 00:23:40.648 "ffdhe4096", 00:23:40.648 "ffdhe6144", 00:23:40.648 "ffdhe8192" 00:23:40.648 ] 00:23:40.648 } 00:23:40.648 }, 00:23:40.648 { 00:23:40.648 "method": "bdev_nvme_set_hotplug", 00:23:40.648 "params": { 00:23:40.648 "period_us": 100000, 00:23:40.648 "enable": false 00:23:40.648 } 00:23:40.648 }, 00:23:40.648 { 00:23:40.648 "method": "bdev_malloc_create", 00:23:40.648 "params": { 00:23:40.649 "name": "malloc0", 00:23:40.649 "num_blocks": 8192, 00:23:40.649 "block_size": 4096, 00:23:40.649 "physical_block_size": 4096, 00:23:40.649 "uuid": "f643e416-5c02-475c-88aa-42dfa523b88b", 00:23:40.649 "optimal_io_boundary": 0 00:23:40.649 } 00:23:40.649 }, 00:23:40.649 { 00:23:40.649 "method": "bdev_wait_for_examine" 00:23:40.649 } 00:23:40.649 ] 00:23:40.649 }, 00:23:40.649 { 00:23:40.649 "subsystem": "nbd", 00:23:40.649 "config": [] 00:23:40.649 }, 00:23:40.649 { 00:23:40.649 "subsystem": "scheduler", 00:23:40.649 "config": [ 00:23:40.649 { 00:23:40.649 "method": "framework_set_scheduler", 00:23:40.649 "params": { 00:23:40.649 "name": "static" 00:23:40.649 } 00:23:40.649 } 00:23:40.649 ] 00:23:40.649 }, 00:23:40.649 { 00:23:40.649 "subsystem": "nvmf", 00:23:40.649 "config": [ 00:23:40.649 { 00:23:40.649 "method": "nvmf_set_config", 00:23:40.649 "params": { 00:23:40.649 "discovery_filter": "match_any", 00:23:40.649 "admin_cmd_passthru": { 00:23:40.649 "identify_ctrlr": false 00:23:40.649 } 00:23:40.649 } 00:23:40.649 }, 00:23:40.649 { 00:23:40.649 "method": "nvmf_set_max_subsystems", 00:23:40.649 "params": { 00:23:40.649 "max_subsystems": 1024 00:23:40.649 } 00:23:40.649 }, 00:23:40.649 { 00:23:40.649 "method": "nvmf_set_crdt", 00:23:40.649 "params": { 00:23:40.649 "crdt1": 0, 00:23:40.649 "crdt2": 0, 00:23:40.649 "crdt3": 0 00:23:40.649 } 00:23:40.649 }, 00:23:40.649 { 00:23:40.649 "method": "nvmf_create_transport", 00:23:40.649 "params": { 00:23:40.649 "trtype": "TCP", 00:23:40.649 "max_queue_depth": 128, 00:23:40.649 "max_io_qpairs_per_ctrlr": 127, 00:23:40.649 "in_capsule_data_size": 4096, 00:23:40.649 "max_io_size": 131072, 00:23:40.649 "io_unit_size": 131072, 00:23:40.649 "max_aq_depth": 128, 00:23:40.649 "num_shared_buffers": 511, 00:23:40.649 "buf_cache_size": 4294967295, 00:23:40.649 "dif_insert_or_strip": false, 00:23:40.649 "zcopy": false, 00:23:40.649 "c2h_success": false, 00:23:40.649 "sock_priority": 0, 00:23:40.649 "abort_timeout_sec": 1, 00:23:40.649 "ack_timeout": 0, 00:23:40.649 "data_wr_pool_size": 0 00:23:40.649 } 00:23:40.649 }, 00:23:40.649 { 00:23:40.649 "method": "nvmf_create_subsystem", 00:23:40.649 "params": { 00:23:40.649 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.649 "allow_any_host": false, 00:23:40.649 "serial_number": "00000000000000000000", 00:23:40.649 "model_number": "SPDK bdev Controller", 00:23:40.649 "max_namespaces": 32, 00:23:40.649 "min_cntlid": 1, 00:23:40.649 "max_cntlid": 65519, 00:23:40.649 "ana_reporting": false 00:23:40.649 } 00:23:40.649 }, 00:23:40.649 { 00:23:40.649 "method": "nvmf_subsystem_add_host", 00:23:40.649 "params": { 00:23:40.649 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.649 "host": "nqn.2016-06.io.spdk:host1", 00:23:40.649 "psk": "key0" 00:23:40.649 } 00:23:40.649 }, 00:23:40.649 { 00:23:40.649 "method": "nvmf_subsystem_add_ns", 00:23:40.649 "params": { 00:23:40.649 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.649 "namespace": { 00:23:40.649 "nsid": 1, 00:23:40.649 "bdev_name": 
"malloc0", 00:23:40.649 "nguid": "F643E4165C02475C88AA42DFA523B88B", 00:23:40.649 "uuid": "f643e416-5c02-475c-88aa-42dfa523b88b", 00:23:40.649 "no_auto_visible": false 00:23:40.649 } 00:23:40.649 } 00:23:40.649 }, 00:23:40.649 { 00:23:40.649 "method": "nvmf_subsystem_add_listener", 00:23:40.649 "params": { 00:23:40.649 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.649 "listen_address": { 00:23:40.649 "trtype": "TCP", 00:23:40.649 "adrfam": "IPv4", 00:23:40.649 "traddr": "10.0.0.2", 00:23:40.649 "trsvcid": "4420" 00:23:40.649 }, 00:23:40.649 "secure_channel": true 00:23:40.649 } 00:23:40.649 } 00:23:40.649 ] 00:23:40.649 } 00:23:40.649 ] 00:23:40.649 }' 00:23:40.649 20:11:28 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:40.907 20:11:28 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:40.907 "subsystems": [ 00:23:40.907 { 00:23:40.907 "subsystem": "keyring", 00:23:40.907 "config": [ 00:23:40.907 { 00:23:40.907 "method": "keyring_file_add_key", 00:23:40.907 "params": { 00:23:40.907 "name": "key0", 00:23:40.907 "path": "/tmp/tmp.ksIJTUxwMO" 00:23:40.907 } 00:23:40.907 } 00:23:40.907 ] 00:23:40.907 }, 00:23:40.907 { 00:23:40.907 "subsystem": "iobuf", 00:23:40.907 "config": [ 00:23:40.907 { 00:23:40.907 "method": "iobuf_set_options", 00:23:40.907 "params": { 00:23:40.907 "small_pool_count": 8192, 00:23:40.907 "large_pool_count": 1024, 00:23:40.907 "small_bufsize": 8192, 00:23:40.907 "large_bufsize": 135168 00:23:40.907 } 00:23:40.907 } 00:23:40.907 ] 00:23:40.907 }, 00:23:40.907 { 00:23:40.907 "subsystem": "sock", 00:23:40.907 "config": [ 00:23:40.907 { 00:23:40.907 "method": "sock_set_default_impl", 00:23:40.907 "params": { 00:23:40.907 "impl_name": "posix" 00:23:40.907 } 00:23:40.907 }, 00:23:40.907 { 00:23:40.907 "method": "sock_impl_set_options", 00:23:40.907 "params": { 00:23:40.907 "impl_name": "ssl", 00:23:40.907 "recv_buf_size": 4096, 00:23:40.907 "send_buf_size": 4096, 00:23:40.907 "enable_recv_pipe": true, 00:23:40.907 "enable_quickack": false, 00:23:40.907 "enable_placement_id": 0, 00:23:40.907 "enable_zerocopy_send_server": true, 00:23:40.907 "enable_zerocopy_send_client": false, 00:23:40.907 "zerocopy_threshold": 0, 00:23:40.907 "tls_version": 0, 00:23:40.907 "enable_ktls": false 00:23:40.907 } 00:23:40.907 }, 00:23:40.907 { 00:23:40.907 "method": "sock_impl_set_options", 00:23:40.907 "params": { 00:23:40.907 "impl_name": "posix", 00:23:40.907 "recv_buf_size": 2097152, 00:23:40.907 "send_buf_size": 2097152, 00:23:40.907 "enable_recv_pipe": true, 00:23:40.907 "enable_quickack": false, 00:23:40.907 "enable_placement_id": 0, 00:23:40.907 "enable_zerocopy_send_server": true, 00:23:40.907 "enable_zerocopy_send_client": false, 00:23:40.907 "zerocopy_threshold": 0, 00:23:40.907 "tls_version": 0, 00:23:40.907 "enable_ktls": false 00:23:40.907 } 00:23:40.907 } 00:23:40.907 ] 00:23:40.907 }, 00:23:40.907 { 00:23:40.907 "subsystem": "vmd", 00:23:40.907 "config": [] 00:23:40.907 }, 00:23:40.907 { 00:23:40.907 "subsystem": "accel", 00:23:40.907 "config": [ 00:23:40.907 { 00:23:40.907 "method": "accel_set_options", 00:23:40.907 "params": { 00:23:40.907 "small_cache_size": 128, 00:23:40.907 "large_cache_size": 16, 00:23:40.907 "task_count": 2048, 00:23:40.908 "sequence_count": 2048, 00:23:40.908 "buf_count": 2048 00:23:40.908 } 00:23:40.908 } 00:23:40.908 ] 00:23:40.908 }, 00:23:40.908 { 00:23:40.908 "subsystem": "bdev", 00:23:40.908 "config": [ 00:23:40.908 { 00:23:40.908 
"method": "bdev_set_options", 00:23:40.908 "params": { 00:23:40.908 "bdev_io_pool_size": 65535, 00:23:40.908 "bdev_io_cache_size": 256, 00:23:40.908 "bdev_auto_examine": true, 00:23:40.908 "iobuf_small_cache_size": 128, 00:23:40.908 "iobuf_large_cache_size": 16 00:23:40.908 } 00:23:40.908 }, 00:23:40.908 { 00:23:40.908 "method": "bdev_raid_set_options", 00:23:40.908 "params": { 00:23:40.908 "process_window_size_kb": 1024 00:23:40.908 } 00:23:40.908 }, 00:23:40.908 { 00:23:40.908 "method": "bdev_iscsi_set_options", 00:23:40.908 "params": { 00:23:40.908 "timeout_sec": 30 00:23:40.908 } 00:23:40.908 }, 00:23:40.908 { 00:23:40.908 "method": "bdev_nvme_set_options", 00:23:40.908 "params": { 00:23:40.908 "action_on_timeout": "none", 00:23:40.908 "timeout_us": 0, 00:23:40.908 "timeout_admin_us": 0, 00:23:40.908 "keep_alive_timeout_ms": 10000, 00:23:40.908 "arbitration_burst": 0, 00:23:40.908 "low_priority_weight": 0, 00:23:40.908 "medium_priority_weight": 0, 00:23:40.908 "high_priority_weight": 0, 00:23:40.908 "nvme_adminq_poll_period_us": 10000, 00:23:40.908 "nvme_ioq_poll_period_us": 0, 00:23:40.908 "io_queue_requests": 512, 00:23:40.908 "delay_cmd_submit": true, 00:23:40.908 "transport_retry_count": 4, 00:23:40.908 "bdev_retry_count": 3, 00:23:40.908 "transport_ack_timeout": 0, 00:23:40.908 "ctrlr_loss_timeout_sec": 0, 00:23:40.908 "reconnect_delay_sec": 0, 00:23:40.908 "fast_io_fail_timeout_sec": 0, 00:23:40.908 "disable_auto_failback": false, 00:23:40.908 "generate_uuids": false, 00:23:40.908 "transport_tos": 0, 00:23:40.908 "nvme_error_stat": false, 00:23:40.908 "rdma_srq_size": 0, 00:23:40.908 "io_path_stat": false, 00:23:40.908 "allow_accel_sequence": false, 00:23:40.908 "rdma_max_cq_size": 0, 00:23:40.908 "rdma_cm_event_timeout_ms": 0, 00:23:40.908 "dhchap_digests": [ 00:23:40.908 "sha256", 00:23:40.908 "sha384", 00:23:40.908 "sha512" 00:23:40.908 ], 00:23:40.908 "dhchap_dhgroups": [ 00:23:40.908 "null", 00:23:40.908 "ffdhe2048", 00:23:40.908 "ffdhe3072", 00:23:40.908 "ffdhe4096", 00:23:40.908 "ffdhe6144", 00:23:40.908 "ffdhe8192" 00:23:40.908 ] 00:23:40.908 } 00:23:40.908 }, 00:23:40.908 { 00:23:40.908 "method": "bdev_nvme_attach_controller", 00:23:40.908 "params": { 00:23:40.908 "name": "nvme0", 00:23:40.908 "trtype": "TCP", 00:23:40.908 "adrfam": "IPv4", 00:23:40.908 "traddr": "10.0.0.2", 00:23:40.908 "trsvcid": "4420", 00:23:40.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.908 "prchk_reftag": false, 00:23:40.908 "prchk_guard": false, 00:23:40.908 "ctrlr_loss_timeout_sec": 0, 00:23:40.908 "reconnect_delay_sec": 0, 00:23:40.908 "fast_io_fail_timeout_sec": 0, 00:23:40.908 "psk": "key0", 00:23:40.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.908 "hdgst": false, 00:23:40.908 "ddgst": false 00:23:40.908 } 00:23:40.908 }, 00:23:40.908 { 00:23:40.908 "method": "bdev_nvme_set_hotplug", 00:23:40.908 "params": { 00:23:40.908 "period_us": 100000, 00:23:40.908 "enable": false 00:23:40.908 } 00:23:40.908 }, 00:23:40.908 { 00:23:40.908 "method": "bdev_enable_histogram", 00:23:40.908 "params": { 00:23:40.908 "name": "nvme0n1", 00:23:40.908 "enable": true 00:23:40.908 } 00:23:40.908 }, 00:23:40.908 { 00:23:40.908 "method": "bdev_wait_for_examine" 00:23:40.908 } 00:23:40.908 ] 00:23:40.908 }, 00:23:40.908 { 00:23:40.908 "subsystem": "nbd", 00:23:40.908 "config": [] 00:23:40.908 } 00:23:40.908 ] 00:23:40.908 }' 00:23:40.908 20:11:28 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3239248 00:23:40.908 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3239248 
']' 00:23:40.908 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3239248 00:23:40.908 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:40.908 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:40.908 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3239248 00:23:40.908 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:40.908 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:40.908 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3239248' 00:23:40.908 killing process with pid 3239248 00:23:40.908 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3239248 00:23:40.908 Received shutdown signal, test time was about 1.000000 seconds 00:23:40.908 00:23:40.908 Latency(us) 00:23:40.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.908 =================================================================================================================== 00:23:40.908 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.908 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3239248 00:23:41.168 20:11:28 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3239228 00:23:41.168 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3239228 ']' 00:23:41.168 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3239228 00:23:41.168 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:41.168 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:41.168 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3239228 00:23:41.168 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:41.168 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:41.168 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3239228' 00:23:41.168 killing process with pid 3239228 00:23:41.168 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3239228 00:23:41.168 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3239228 00:23:41.426 20:11:28 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:41.426 20:11:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.426 20:11:28 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:41.426 "subsystems": [ 00:23:41.426 { 00:23:41.426 "subsystem": "keyring", 00:23:41.426 "config": [ 00:23:41.426 { 00:23:41.426 "method": "keyring_file_add_key", 00:23:41.426 "params": { 00:23:41.426 "name": "key0", 00:23:41.426 "path": "/tmp/tmp.ksIJTUxwMO" 00:23:41.426 } 00:23:41.426 } 00:23:41.426 ] 00:23:41.426 }, 00:23:41.426 { 00:23:41.426 "subsystem": "iobuf", 00:23:41.426 "config": [ 00:23:41.426 { 00:23:41.426 "method": "iobuf_set_options", 00:23:41.426 "params": { 00:23:41.426 "small_pool_count": 8192, 00:23:41.426 "large_pool_count": 1024, 00:23:41.426 "small_bufsize": 8192, 00:23:41.426 "large_bufsize": 135168 00:23:41.426 } 00:23:41.426 } 00:23:41.426 ] 00:23:41.426 }, 00:23:41.426 { 00:23:41.426 "subsystem": "sock", 00:23:41.426 "config": [ 00:23:41.426 { 00:23:41.426 "method": "sock_set_default_impl", 
00:23:41.426 "params": { 00:23:41.426 "impl_name": "posix" 00:23:41.426 } 00:23:41.426 }, 00:23:41.426 { 00:23:41.426 "method": "sock_impl_set_options", 00:23:41.426 "params": { 00:23:41.426 "impl_name": "ssl", 00:23:41.426 "recv_buf_size": 4096, 00:23:41.426 "send_buf_size": 4096, 00:23:41.426 "enable_recv_pipe": true, 00:23:41.426 "enable_quickack": false, 00:23:41.426 "enable_placement_id": 0, 00:23:41.426 "enable_zerocopy_send_server": true, 00:23:41.426 "enable_zerocopy_send_client": false, 00:23:41.426 "zerocopy_threshold": 0, 00:23:41.426 "tls_version": 0, 00:23:41.426 "enable_ktls": false 00:23:41.426 } 00:23:41.426 }, 00:23:41.426 { 00:23:41.426 "method": "sock_impl_set_options", 00:23:41.426 "params": { 00:23:41.426 "impl_name": "posix", 00:23:41.426 "recv_buf_size": 2097152, 00:23:41.426 "send_buf_size": 2097152, 00:23:41.426 "enable_recv_pipe": true, 00:23:41.426 "enable_quickack": false, 00:23:41.426 "enable_placement_id": 0, 00:23:41.426 "enable_zerocopy_send_server": true, 00:23:41.426 "enable_zerocopy_send_client": false, 00:23:41.426 "zerocopy_threshold": 0, 00:23:41.426 "tls_version": 0, 00:23:41.426 "enable_ktls": false 00:23:41.426 } 00:23:41.426 } 00:23:41.426 ] 00:23:41.426 }, 00:23:41.426 { 00:23:41.426 "subsystem": "vmd", 00:23:41.426 "config": [] 00:23:41.426 }, 00:23:41.426 { 00:23:41.426 "subsystem": "accel", 00:23:41.426 "config": [ 00:23:41.426 { 00:23:41.426 "method": "accel_set_options", 00:23:41.426 "params": { 00:23:41.426 "small_cache_size": 128, 00:23:41.426 "large_cache_size": 16, 00:23:41.426 "task_count": 2048, 00:23:41.426 "sequence_count": 2048, 00:23:41.426 "buf_count": 2048 00:23:41.426 } 00:23:41.426 } 00:23:41.426 ] 00:23:41.426 }, 00:23:41.426 { 00:23:41.427 "subsystem": "bdev", 00:23:41.427 "config": [ 00:23:41.427 { 00:23:41.427 "method": "bdev_set_options", 00:23:41.427 "params": { 00:23:41.427 "bdev_io_pool_size": 65535, 00:23:41.427 "bdev_io_cache_size": 256, 00:23:41.427 "bdev_auto_examine": true, 00:23:41.427 "iobuf_small_cache_size": 128, 00:23:41.427 "iobuf_large_cache_size": 16 00:23:41.427 } 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "method": "bdev_raid_set_options", 00:23:41.427 "params": { 00:23:41.427 "process_window_size_kb": 1024 00:23:41.427 } 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "method": "bdev_iscsi_set_options", 00:23:41.427 "params": { 00:23:41.427 "timeout_sec": 30 00:23:41.427 } 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "method": "bdev_nvme_set_options", 00:23:41.427 "params": { 00:23:41.427 "action_on_timeout": "none", 00:23:41.427 "timeout_us": 0, 00:23:41.427 "timeout_admin_us": 0, 00:23:41.427 "keep_alive_timeout_ms": 10000, 00:23:41.427 "arbitration_burst": 0, 00:23:41.427 "low_priority_weight": 0, 00:23:41.427 "medium_priority_weight": 0, 00:23:41.427 "high_priority_weight": 0, 00:23:41.427 "nvme_adminq_poll_period_us": 10000, 00:23:41.427 "nvme_ioq_poll_period_us": 0, 00:23:41.427 "io_queue_requests": 0, 00:23:41.427 "delay_cmd_submit": true, 00:23:41.427 "transport_retry_count": 4, 00:23:41.427 "bdev_retry_count": 3, 00:23:41.427 "transport_ack_timeout": 0, 00:23:41.427 "ctrlr_loss_timeout_sec": 0, 00:23:41.427 "reconnect_delay_sec": 0, 00:23:41.427 "fast_io_fail_timeout_sec": 0, 00:23:41.427 "disable_auto_failback": false, 00:23:41.427 "generate_uuids": false, 00:23:41.427 "transport_tos": 0, 00:23:41.427 "nvme_error_stat": false, 00:23:41.427 "rdma_srq_size": 0, 00:23:41.427 "io_path_stat": false, 00:23:41.427 "allow_accel_sequence": false, 00:23:41.427 "rdma_max_cq_size": 0, 00:23:41.427 
"rdma_cm_event_timeout_ms": 0, 00:23:41.427 "dhchap_digests": [ 00:23:41.427 "sha256", 00:23:41.427 "sha384", 00:23:41.427 "sha512" 00:23:41.427 ], 00:23:41.427 "dhchap_dhgroups": [ 00:23:41.427 "null", 00:23:41.427 "ffdhe2048", 00:23:41.427 "ffdhe3072", 00:23:41.427 "ffdhe4096", 00:23:41.427 "ffdhe6144", 00:23:41.427 "ffdhe8192" 00:23:41.427 ] 00:23:41.427 } 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "method": "bdev_nvme_set_hotplug", 00:23:41.427 "params": { 00:23:41.427 "period_us": 100000, 00:23:41.427 "enable": false 00:23:41.427 } 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "method": "bdev_malloc_create", 00:23:41.427 "params": { 00:23:41.427 "name": "malloc0", 00:23:41.427 "num_blocks": 8192, 00:23:41.427 "block_size": 4096, 00:23:41.427 "physical_block_size": 4096, 00:23:41.427 "uuid": "f643e416-5c02-475c-88aa-42dfa523b88b", 00:23:41.427 "optimal_io_boundary": 0 00:23:41.427 } 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "method": "bdev_wait_for_examine" 00:23:41.427 } 00:23:41.427 ] 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "subsystem": "nbd", 00:23:41.427 "config": [] 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "subsystem": "scheduler", 00:23:41.427 "config": [ 00:23:41.427 { 00:23:41.427 "method": "framework_set_scheduler", 00:23:41.427 "params": { 00:23:41.427 "name": "static" 00:23:41.427 } 00:23:41.427 } 00:23:41.427 ] 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "subsystem": "nvmf", 00:23:41.427 "config": [ 00:23:41.427 { 00:23:41.427 "method": "nvmf_set_config", 00:23:41.427 "params": { 00:23:41.427 "discovery_filter": "match_any", 00:23:41.427 "admin_cmd_passthru": { 00:23:41.427 "identify_ctrlr": false 00:23:41.427 } 00:23:41.427 } 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "method": "nvmf_set_max_subsystems", 00:23:41.427 "params": { 00:23:41.427 "max_subsystems": 1024 00:23:41.427 } 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "method": "nvmf_set_crdt", 00:23:41.427 "params": { 00:23:41.427 "crdt1": 0, 00:23:41.427 "crdt2": 0, 00:23:41.427 "crdt3": 0 00:23:41.427 } 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "method": "nvmf_create_transport", 00:23:41.427 "params": { 00:23:41.427 "trtype": "TCP", 00:23:41.427 "max_queue_depth": 128, 00:23:41.427 "max_io_qpairs_per_ctrlr": 127, 00:23:41.427 "in_capsule_data_size": 4096, 00:23:41.427 "max_io_size": 131072, 00:23:41.427 "io_unit_size": 131072, 00:23:41.427 "max_aq_depth": 128, 00:23:41.427 "num_shared_buffers": 511, 00:23:41.427 "buf_cache_size": 4294967295, 00:23:41.427 "dif_insert_or_strip": false, 00:23:41.427 "zcopy": false, 00:23:41.427 "c2h_success": false, 00:23:41.427 "sock_priority": 0, 00:23:41.427 "abort_timeout_sec": 1, 00:23:41.427 "ack_timeout": 0, 00:23:41.427 "data_wr_pool_size": 0 00:23:41.427 } 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "method": "nvmf_create_subsystem", 00:23:41.427 "params": { 00:23:41.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.427 "allow_any_host": false, 00:23:41.427 "serial_number": "00000000000000000000", 00:23:41.427 "model_number": "SPDK bdev Controller", 00:23:41.427 "max_namespaces": 32, 00:23:41.427 "min_cntlid": 1, 00:23:41.427 "max_cntlid": 65519, 00:23:41.427 "ana_reporting": false 00:23:41.427 } 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "method": "nvmf_subsystem_add_host", 00:23:41.427 "params": { 00:23:41.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.427 "host": "nqn.2016-06.io.spdk:host1", 00:23:41.427 "psk": "key0" 00:23:41.427 } 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "method": "nvmf_subsystem_add_ns", 00:23:41.427 "params": { 00:23:41.427 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:41.427 "namespace": { 00:23:41.427 "nsid": 1, 00:23:41.427 "bdev_name": "malloc0", 00:23:41.427 "nguid": "F643E4165C02475C88AA42DFA523B88B", 00:23:41.427 "uuid": "f643e416-5c02-475c-88aa-42dfa523b88b", 00:23:41.427 "no_auto_visible": false 00:23:41.427 } 00:23:41.427 } 00:23:41.427 }, 00:23:41.427 { 00:23:41.427 "method": "nvmf_subsystem_add_listener", 00:23:41.427 "params": { 00:23:41.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.427 "listen_address": { 00:23:41.427 "trtype": "TCP", 00:23:41.427 "adrfam": "IPv4", 00:23:41.427 "traddr": "10.0.0.2", 00:23:41.427 "trsvcid": "4420" 00:23:41.427 }, 00:23:41.427 "secure_channel": true 00:23:41.427 } 00:23:41.427 } 00:23:41.427 ] 00:23:41.427 } 00:23:41.427 ] 00:23:41.427 }' 00:23:41.427 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:41.427 20:11:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.427 20:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3239656 00:23:41.427 20:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:41.427 20:11:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3239656 00:23:41.427 20:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3239656 ']' 00:23:41.427 20:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.427 20:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:41.427 20:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.427 20:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:41.427 20:11:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.427 [2024-07-13 20:11:29.054538] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:41.427 [2024-07-13 20:11:29.054627] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.685 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.685 [2024-07-13 20:11:29.127332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.685 [2024-07-13 20:11:29.216083] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.685 [2024-07-13 20:11:29.216149] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.685 [2024-07-13 20:11:29.216166] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.685 [2024-07-13 20:11:29.216179] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.685 [2024-07-13 20:11:29.216191] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:41.685 [2024-07-13 20:11:29.216277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.942 [2024-07-13 20:11:29.453214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.942 [2024-07-13 20:11:29.485233] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:41.942 [2024-07-13 20:11:29.498077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.508 20:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:42.508 20:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:42.508 20:11:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:42.509 20:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:42.509 20:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.509 20:11:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.509 20:11:30 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3239808 00:23:42.509 20:11:30 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3239808 /var/tmp/bdevperf.sock 00:23:42.509 20:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 3239808 ']' 00:23:42.509 20:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.509 20:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:42.509 20:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
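The bdevperf configuration echoed below is the bperfcfg counterpart captured from /var/tmp/bdevperf.sock: instead of the target's nvmf_* section it carries the initiator-side pair that re-creates the TLS connection when bdevperf parses /dev/fd/63. Trimmed to the two entries that matter (elisions marked; the full dump follows):

    { "method": "keyring_file_add_key",
      "params": { "name": "key0", "path": "/tmp/tmp.ksIJTUxwMO" } }
    { "method": "bdev_nvme_attach_controller",
      "params": { "name": "nvme0", "trtype": "TCP", "traddr": "10.0.0.2",
                  "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "key0", ... } }

bdev_enable_histogram and the nbd stub are also carried over, so perform_tests can run against nvme0n1 immediately after startup.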
00:23:42.509 20:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:42.509 20:11:30 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:42.509 20:11:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.509 20:11:30 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:42.509 "subsystems": [ 00:23:42.509 { 00:23:42.509 "subsystem": "keyring", 00:23:42.509 "config": [ 00:23:42.509 { 00:23:42.509 "method": "keyring_file_add_key", 00:23:42.509 "params": { 00:23:42.509 "name": "key0", 00:23:42.509 "path": "/tmp/tmp.ksIJTUxwMO" 00:23:42.509 } 00:23:42.509 } 00:23:42.509 ] 00:23:42.509 }, 00:23:42.509 { 00:23:42.509 "subsystem": "iobuf", 00:23:42.509 "config": [ 00:23:42.509 { 00:23:42.509 "method": "iobuf_set_options", 00:23:42.509 "params": { 00:23:42.509 "small_pool_count": 8192, 00:23:42.509 "large_pool_count": 1024, 00:23:42.509 "small_bufsize": 8192, 00:23:42.509 "large_bufsize": 135168 00:23:42.509 } 00:23:42.509 } 00:23:42.509 ] 00:23:42.509 }, 00:23:42.509 { 00:23:42.509 "subsystem": "sock", 00:23:42.509 "config": [ 00:23:42.509 { 00:23:42.509 "method": "sock_set_default_impl", 00:23:42.509 "params": { 00:23:42.509 "impl_name": "posix" 00:23:42.509 } 00:23:42.509 }, 00:23:42.509 { 00:23:42.509 "method": "sock_impl_set_options", 00:23:42.509 "params": { 00:23:42.509 "impl_name": "ssl", 00:23:42.509 "recv_buf_size": 4096, 00:23:42.509 "send_buf_size": 4096, 00:23:42.509 "enable_recv_pipe": true, 00:23:42.509 "enable_quickack": false, 00:23:42.509 "enable_placement_id": 0, 00:23:42.509 "enable_zerocopy_send_server": true, 00:23:42.509 "enable_zerocopy_send_client": false, 00:23:42.509 "zerocopy_threshold": 0, 00:23:42.509 "tls_version": 0, 00:23:42.509 "enable_ktls": false 00:23:42.509 } 00:23:42.509 }, 00:23:42.509 { 00:23:42.509 "method": "sock_impl_set_options", 00:23:42.509 "params": { 00:23:42.509 "impl_name": "posix", 00:23:42.509 "recv_buf_size": 2097152, 00:23:42.509 "send_buf_size": 2097152, 00:23:42.509 "enable_recv_pipe": true, 00:23:42.509 "enable_quickack": false, 00:23:42.509 "enable_placement_id": 0, 00:23:42.509 "enable_zerocopy_send_server": true, 00:23:42.509 "enable_zerocopy_send_client": false, 00:23:42.509 "zerocopy_threshold": 0, 00:23:42.509 "tls_version": 0, 00:23:42.509 "enable_ktls": false 00:23:42.509 } 00:23:42.509 } 00:23:42.509 ] 00:23:42.509 }, 00:23:42.509 { 00:23:42.509 "subsystem": "vmd", 00:23:42.509 "config": [] 00:23:42.509 }, 00:23:42.509 { 00:23:42.509 "subsystem": "accel", 00:23:42.509 "config": [ 00:23:42.509 { 00:23:42.509 "method": "accel_set_options", 00:23:42.509 "params": { 00:23:42.509 "small_cache_size": 128, 00:23:42.509 "large_cache_size": 16, 00:23:42.509 "task_count": 2048, 00:23:42.509 "sequence_count": 2048, 00:23:42.509 "buf_count": 2048 00:23:42.509 } 00:23:42.509 } 00:23:42.509 ] 00:23:42.509 }, 00:23:42.509 { 00:23:42.509 "subsystem": "bdev", 00:23:42.509 "config": [ 00:23:42.509 { 00:23:42.509 "method": "bdev_set_options", 00:23:42.509 "params": { 00:23:42.509 "bdev_io_pool_size": 65535, 00:23:42.509 "bdev_io_cache_size": 256, 00:23:42.509 "bdev_auto_examine": true, 00:23:42.509 "iobuf_small_cache_size": 128, 00:23:42.509 "iobuf_large_cache_size": 16 00:23:42.509 } 00:23:42.509 }, 00:23:42.509 { 00:23:42.509 "method": "bdev_raid_set_options", 00:23:42.509 "params": { 00:23:42.509 "process_window_size_kb": 1024 00:23:42.509 } 
00:23:42.509 }, 00:23:42.509 { 00:23:42.509 "method": "bdev_iscsi_set_options", 00:23:42.509 "params": { 00:23:42.509 "timeout_sec": 30 00:23:42.509 } 00:23:42.509 }, 00:23:42.509 { 00:23:42.509 "method": "bdev_nvme_set_options", 00:23:42.509 "params": { 00:23:42.509 "action_on_timeout": "none", 00:23:42.509 "timeout_us": 0, 00:23:42.509 "timeout_admin_us": 0, 00:23:42.509 "keep_alive_timeout_ms": 10000, 00:23:42.509 "arbitration_burst": 0, 00:23:42.509 "low_priority_weight": 0, 00:23:42.509 "medium_priority_weight": 0, 00:23:42.509 "high_priority_weight": 0, 00:23:42.509 "nvme_adminq_poll_period_us": 10000, 00:23:42.509 "nvme_ioq_poll_period_us": 0, 00:23:42.509 "io_queue_requests": 512, 00:23:42.509 "delay_cmd_submit": true, 00:23:42.509 "transport_retry_count": 4, 00:23:42.509 "bdev_retry_count": 3, 00:23:42.509 "transport_ack_timeout": 0, 00:23:42.509 "ctrlr_loss_timeout_sec": 0, 00:23:42.509 "reconnect_delay_sec": 0, 00:23:42.509 "fast_io_fail_timeout_sec": 0, 00:23:42.509 "disable_auto_failback": false, 00:23:42.509 "generate_uuids": false, 00:23:42.509 "transport_tos": 0, 00:23:42.509 "nvme_error_stat": false, 00:23:42.509 "rdma_srq_size": 0, 00:23:42.509 "io_path_stat": false, 00:23:42.509 "allow_accel_sequence": false, 00:23:42.509 "rdma_max_cq_size": 0, 00:23:42.509 "rdma_cm_event_timeout_ms": 0, 00:23:42.509 "dhchap_digests": [ 00:23:42.509 "sha256", 00:23:42.509 "sha384", 00:23:42.509 "sha512" 00:23:42.509 ], 00:23:42.509 "dhchap_dhgroups": [ 00:23:42.509 "null", 00:23:42.509 "ffdhe2048", 00:23:42.509 "ffdhe3072", 00:23:42.509 "ffdhe4096", 00:23:42.509 "ffdhe6144", 00:23:42.509 "ffdhe8192" 00:23:42.509 ] 00:23:42.509 } 00:23:42.509 }, 00:23:42.509 { 00:23:42.509 "method": "bdev_nvme_attach_controller", 00:23:42.509 "params": { 00:23:42.509 "name": "nvme0", 00:23:42.509 "trtype": "TCP", 00:23:42.509 "adrfam": "IPv4", 00:23:42.509 "traddr": "10.0.0.2", 00:23:42.509 "trsvcid": "4420", 00:23:42.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:42.509 "prchk_reftag": false, 00:23:42.509 "prchk_guard": false, 00:23:42.509 "ctrlr_loss_timeout_sec": 0, 00:23:42.509 "reconnect_delay_sec": 0, 00:23:42.509 "fast_io_fail_timeout_sec": 0, 00:23:42.509 "psk": "key0", 00:23:42.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:42.509 "hdgst": false, 00:23:42.509 "ddgst": false 00:23:42.509 } 00:23:42.509 }, 00:23:42.509 { 00:23:42.509 "method": "bdev_nvme_set_hotplug", 00:23:42.509 "params": { 00:23:42.509 "period_us": 100000, 00:23:42.509 "enable": false 00:23:42.509 } 00:23:42.509 }, 00:23:42.509 { 00:23:42.509 "method": "bdev_enable_histogram", 00:23:42.509 "params": { 00:23:42.509 "name": "nvme0n1", 00:23:42.509 "enable": true 00:23:42.509 } 00:23:42.509 }, 00:23:42.509 { 00:23:42.509 "method": "bdev_wait_for_examine" 00:23:42.509 } 00:23:42.509 ] 00:23:42.509 }, 00:23:42.509 { 00:23:42.509 "subsystem": "nbd", 00:23:42.509 "config": [] 00:23:42.509 } 00:23:42.509 ] 00:23:42.509 }' 00:23:42.509 [2024-07-13 20:11:30.088377] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:23:42.509 [2024-07-13 20:11:30.088452] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3239808 ] 00:23:42.509 EAL: No free 2048 kB hugepages reported on node 1 00:23:42.509 [2024-07-13 20:11:30.152106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.769 [2024-07-13 20:11:30.242602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.769 [2024-07-13 20:11:30.424707] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:43.703 20:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:43.703 20:11:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:23:43.703 20:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:43.703 20:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:43.703 20:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.703 20:11:31 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:43.962 Running I/O for 1 seconds... 00:23:44.899 00:23:44.899 Latency(us) 00:23:44.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.899 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:44.899 Verification LBA range: start 0x0 length 0x2000 00:23:44.899 nvme0n1 : 1.06 2015.58 7.87 0.00 0.00 62012.22 6310.87 90876.59 00:23:44.899 =================================================================================================================== 00:23:44.899 Total : 2015.58 7.87 0.00 0.00 62012.22 6310.87 90876.59 00:23:44.899 0 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:44.899 nvmf_trace.0 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3239808 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3239808 ']' 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3239808 
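(kill -0, as traced here, delivers no signal at all; it only asks the kernel whether the PID still exists and is signalable, which makes it the liveness probe of the killprocess helper. The surrounding pattern, roughly:

    kill -0 "$pid"                                   # still alive?
    process_name=$(ps --no-headers -o comm= "$pid")  # what is it?
    [ "$process_name" = sudo ] || kill "$pid"        # the sudo case takes a different path
    wait "$pid"                                      # reap and propagate the exit status

The comm= lookup is what produces the process_name=reactor_1 lines in these traces; the helper appears to special-case processes whose command name is sudo rather than an SPDK reactor.)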
00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:44.899 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3239808 00:23:45.157 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:45.157 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:45.157 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3239808' 00:23:45.157 killing process with pid 3239808 00:23:45.157 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3239808 00:23:45.157 Received shutdown signal, test time was about 1.000000 seconds 00:23:45.157 00:23:45.157 Latency(us) 00:23:45.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.157 =================================================================================================================== 00:23:45.157 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:45.157 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3239808 00:23:45.157 20:11:32 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:45.157 20:11:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:45.157 20:11:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:45.157 20:11:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:45.157 20:11:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:45.157 20:11:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:45.157 20:11:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:45.157 rmmod nvme_tcp 00:23:45.157 rmmod nvme_fabrics 00:23:45.414 rmmod nvme_keyring 00:23:45.414 20:11:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:45.414 20:11:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:45.414 20:11:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:45.414 20:11:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3239656 ']' 00:23:45.414 20:11:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3239656 00:23:45.414 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 3239656 ']' 00:23:45.414 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 3239656 00:23:45.414 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:23:45.414 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:45.414 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3239656 00:23:45.414 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:45.414 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:45.414 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3239656' 00:23:45.414 killing process with pid 3239656 00:23:45.414 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 3239656 00:23:45.414 20:11:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 3239656 00:23:45.673 20:11:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:45.673 20:11:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:45.673 20:11:33 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:45.673 20:11:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.673 20:11:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:45.673 20:11:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.673 20:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.673 20:11:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.579 20:11:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:47.579 20:11:35 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.fDJ9sCHpSn /tmp/tmp.iVxDVJYhuK /tmp/tmp.ksIJTUxwMO 00:23:47.579 00:23:47.579 real 1m19.212s 00:23:47.579 user 1m59.361s 00:23:47.579 sys 0m28.033s 00:23:47.579 20:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:47.579 20:11:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.579 ************************************ 00:23:47.579 END TEST nvmf_tls 00:23:47.579 ************************************ 00:23:47.579 20:11:35 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:47.579 20:11:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:47.579 20:11:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:47.579 20:11:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:47.579 ************************************ 00:23:47.579 START TEST nvmf_fips 00:23:47.579 ************************************ 00:23:47.579 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:47.838 * Looking for test storage... 
00:23:47.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.838 20:11:35 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:47.838 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:47.839 Error setting digest 00:23:47.839 00A2BF5E717F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:47.839 00A2BF5E717F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:47.839 20:11:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:49.741 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.741 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:49.741 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:49.741 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:49.741 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:49.741 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:49.741 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:49.741 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:49.741 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:50.001 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:50.001 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:50.001 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:50.001 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:50.001 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:50.001 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:50.001 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:50.001 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:50.001 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:50.001 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:50.001 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:50.001 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:50.001 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:50.002 
20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:50.002 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:50.002 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:50.002 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:50.002 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:50.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:50.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:23:50.002 00:23:50.002 --- 10.0.0.2 ping statistics --- 00:23:50.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.002 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:50.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:50.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:23:50.002 00:23:50.002 --- 10.0.0.1 ping statistics --- 00:23:50.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:50.002 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3242066 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3242066 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3242066 ']' 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:50.002 20:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:50.003 20:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:50.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:50.003 20:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:50.003 20:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:50.003 [2024-07-13 20:11:37.641668] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:50.003 [2024-07-13 20:11:37.641762] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.262 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.262 [2024-07-13 20:11:37.708184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.262 [2024-07-13 20:11:37.796127] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.262 [2024-07-13 20:11:37.796207] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
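waitforlisten, traced above, gates the rest of the test on the freshly started nvmf_tgt actually owning its RPC socket. The idea reduces to a bounded poll, roughly as follows (an illustrative sketch; the real helper differs in detail and also verifies the socket answers RPCs):

    # Sketch: wait until $pid has created the RPC UNIX socket, or fail.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            [ -S "$rpc_addr" ] && return 0           # socket exists: listening
            sleep 0.1
        done
        return 1                                     # timed out
    }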
00:23:50.262 [2024-07-13 20:11:37.796236] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.262 [2024-07-13 20:11:37.796247] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.262 [2024-07-13 20:11:37.796257] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.262 [2024-07-13 20:11:37.796285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.262 20:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:50.262 20:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:50.262 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:50.262 20:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:50.262 20:11:37 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:50.520 20:11:37 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.520 20:11:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:50.520 20:11:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:50.520 20:11:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:50.520 20:11:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:50.520 20:11:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:50.520 20:11:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:50.520 20:11:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:50.520 20:11:37 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:50.520 [2024-07-13 20:11:38.175347] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.778 [2024-07-13 20:11:38.191328] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:50.778 [2024-07-13 20:11:38.191546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.778 [2024-07-13 20:11:38.222489] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:50.778 malloc0 00:23:50.778 20:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:50.778 20:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3242200 00:23:50.778 20:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:50.778 20:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3242200 /var/tmp/bdevperf.sock 00:23:50.778 20:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 3242200 ']' 00:23:50.778 20:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:50.778 20:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- 
# local max_retries=100 00:23:50.778 20:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:50.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:50.778 20:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:50.778 20:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:50.778 [2024-07-13 20:11:38.306750] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:23:50.778 [2024-07-13 20:11:38.306830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3242200 ] 00:23:50.778 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.778 [2024-07-13 20:11:38.364240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.037 [2024-07-13 20:11:38.449941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.037 20:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:51.037 20:11:38 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:23:51.037 20:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:51.295 [2024-07-13 20:11:38.780388] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:51.295 [2024-07-13 20:11:38.780537] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:51.295 TLSTESTn1 00:23:51.295 20:11:38 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:51.554 Running I/O for 10 seconds... 
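Behind the run just launched, the TLS wiring reduces to a single PSK file shared by target and initiator: the interchange-format key is written to a mode-0600 file, and the same path is handed to bdev_nvme_attach_controller over bdevperf's RPC socket. Condensed from the trace above (the key value is elided here; paths shortened):

    # PSK in NVMe/TCP interchange format (hash function 01); value elided.
    echo -n 'NVMeTLSkey-1:01:...' > key.txt && chmod 0600 key.txt
    # Initiator side: attach a TLS-protected controller via bdevperf's RPC socket.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt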
00:24:01.590
00:24:01.590                                      Latency(us)
00:24:01.590 Device Information : runtime(s)   IOPS   MiB/s   Fail/s   TO/s   Average   min   max
00:24:01.590 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:01.590 Verification LBA range: start 0x0 length 0x2000
00:24:01.590 TLSTESTn1 : 10.05 2257.08 8.82 0.00 0.00 56553.75 8543.95 88546.42
00:24:01.590 ===================================================================================================================
00:24:01.590 Total : 2257.08 8.82 0.00 0.00 56553.75 8543.95 88546.42
00:24:01.590 0
00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:01.590 nvmf_trace.0 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3242200 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3242200 ']' 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3242200 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3242200 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3242200' 00:24:01.590 killing process with pid 3242200 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3242200
00:24:01.590 Received shutdown signal, test time was about 10.000000 seconds
00:24:01.590
00:24:01.590                                      Latency(us)
00:24:01.590 Device Information : runtime(s)   IOPS   MiB/s   Fail/s   TO/s   Average   min   max
00:24:01.590 ===================================================================================================================
00:24:01.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
[2024-07-13 20:11:49.152673] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:01.590 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3242200 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:01.850 rmmod nvme_tcp 00:24:01.850 rmmod nvme_fabrics 00:24:01.850 rmmod nvme_keyring 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3242066 ']' 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3242066 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 3242066 ']' 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 3242066 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3242066 00:24:01.850 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:01.851 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:01.851 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3242066' 00:24:01.851 killing process with pid 3242066 00:24:01.851 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 3242066 00:24:01.851 [2024-07-13 20:11:49.447290] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:01.851 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 3242066 00:24:02.110 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:02.110 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:02.110 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:02.110 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:02.110 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:02.110 20:11:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.110 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.110 20:11:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:04.639 00:24:04.639 real 0m16.546s 00:24:04.639 user 0m19.973s 00:24:04.639 sys 0m6.630s 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:04.639 ************************************ 00:24:04.639 END TEST nvmf_fips 
00:24:04.639 ************************************ 00:24:04.639 20:11:51 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:04.639 20:11:51 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:04.639 20:11:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:04.639 20:11:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:04.639 20:11:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.639 ************************************ 00:24:04.639 START TEST nvmf_fuzz 00:24:04.639 ************************************ 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:04.639 * Looking for test storage... 00:24:04.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:04.639 20:11:51 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:04.639 20:11:51 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:06.542 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:06.542 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:06.542 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.542 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:06.543 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:06.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:24:06.543 00:24:06.543 --- 10.0.0.2 ping statistics --- 00:24:06.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.543 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.543 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:06.543 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:24:06.543 00:24:06.543 --- 10.0.0.1 ping statistics --- 00:24:06.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.543 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3245448 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3245448 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 3245448 ']' 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
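
The trace above assembles the whole test topology on a single host: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to play the target, the other (cvl_0_1) stays in the root namespace as the initiator, and the two pings confirm 10.0.0.1 <-> 10.0.0.2 reachability before nvmf_tgt is started inside the namespace. A minimal standalone sketch of the same setup, using the interface names, addresses, and flags taken from this log:

  # target-side namespace; move one E810 port into it
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps 10.0.0.1 in the root namespace; target gets 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  # bring links up and open the NVMe/TCP port on the initiator side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity check, then launch the target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

Note that the fuzz run deliberately pins the target to a single core (-m 0x1); the nvmf_multiconnection test later in this log restarts it with -m 0xF.
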
00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:06.543 20:11:53 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:06.543 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:06.543 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:06.803 Malloc0 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:06.803 20:11:54 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:38.879 Fuzzing completed. 
Shutting down the fuzz application 00:24:38.879 00:24:38.879 Dumping successful admin opcodes: 00:24:38.879 8, 9, 10, 24, 00:24:38.879 Dumping successful io opcodes: 00:24:38.879 0, 9, 00:24:38.879 NS: 0x200003aeff00 I/O qp, Total commands completed: 471014, total successful commands: 2717, random_seed: 2823323584 00:24:38.879 NS: 0x200003aeff00 admin qp, Total commands completed: 57887, total successful commands: 462, random_seed: 3099764864 00:24:38.879 20:12:24 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:38.879 Fuzzing completed. Shutting down the fuzz application 00:24:38.879 00:24:38.879 Dumping successful admin opcodes: 00:24:38.879 24, 00:24:38.879 Dumping successful io opcodes: 00:24:38.879 00:24:38.879 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 4249590833 00:24:38.879 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 4249712789 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:38.879 rmmod nvme_tcp 00:24:38.879 rmmod nvme_fabrics 00:24:38.879 rmmod nvme_keyring 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 3245448 ']' 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 3245448 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 3245448 ']' 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 3245448 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3245448 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
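
The fuzz pass above provisions exactly one subsystem backed by a 64 MB malloc bdev with 512-byte blocks, then runs nvme_fuzz twice: a 30-second seeded random pass (-t 30 -S 123456) and a deterministic replay of the canned cases in example.json (-j). The opcode dumps list which admin and I/O opcodes ever completed successfully; a large "commands completed" count with few successes is the expected shape, since the target should reject most fuzzed commands. Condensed, and with the harness's rpc_cmd wrapper shown here as direct scripts/rpc.py calls against /var/tmp/spdk.sock (an equivalent, not the harness's exact code path), the sequence is roughly:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  # pass 1: time-bounded random commands with a fixed seed; pass 2: JSON replay
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j test/app/fuzz/nvme_fuzz/example.json -a

Teardown, visible around this point in the log, runs in reverse: the subsystem is deleted over RPC, the kernel nvme-tcp, nvme-fabrics, and nvme-keyring modules are unloaded, and the target process is killed.
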
00:24:38.879 20:12:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3245448' 00:24:38.880 killing process with pid 3245448 00:24:38.880 20:12:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 3245448 00:24:38.880 20:12:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 3245448 00:24:38.880 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:38.880 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:38.880 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:38.880 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.880 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:38.880 20:12:26 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.880 20:12:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.880 20:12:26 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.419 20:12:28 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:41.419 20:12:28 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:41.419 00:24:41.419 real 0m36.719s 00:24:41.419 user 0m51.184s 00:24:41.419 sys 0m15.003s 00:24:41.419 20:12:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:41.419 20:12:28 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.419 ************************************ 00:24:41.419 END TEST nvmf_fuzz 00:24:41.419 ************************************ 00:24:41.419 20:12:28 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:41.419 20:12:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:41.419 20:12:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:41.419 20:12:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:41.419 ************************************ 00:24:41.419 START TEST nvmf_multiconnection 00:24:41.419 ************************************ 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:41.419 * Looking for test storage... 
00:24:41.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.419 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:41.420 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:41.420 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:41.420 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.420 20:12:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:41.420 20:12:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.420 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:41.420 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:41.420 20:12:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:41.420 20:12:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.884 20:12:30 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:42.884 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:42.884 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:42.884 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:42.884 20:12:30 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.884 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:42.885 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.885 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:43.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:24:43.169 00:24:43.169 --- 10.0.0.2 ping statistics --- 00:24:43.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.169 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:43.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:24:43.169 00:24:43.169 --- 10.0.0.1 ping statistics --- 00:24:43.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.169 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=3251659 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 3251659 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 3251659 ']' 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
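
Unlike the single-core fuzz target, nvmf_multiconnection restarts nvmf_tgt with -m 0xF, so the EAL and reactor notices that follow show four reactors (cores 0-3) serving the eleven subsystems this test creates. What nvmfappstart plus waitforlisten do here reduces to roughly the sketch below; the readiness loop is an approximation of waitforlisten's polling against the RPC socket, not its exact code:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the RPC socket until the app answers; the harness's waitforlisten is the bounded equivalent
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
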
00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:43.169 20:12:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.169 [2024-07-13 20:12:30.710062] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:24:43.169 [2024-07-13 20:12:30.710133] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.169 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.169 [2024-07-13 20:12:30.778407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:43.427 [2024-07-13 20:12:30.874581] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:43.427 [2024-07-13 20:12:30.874627] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:43.427 [2024-07-13 20:12:30.874655] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:43.427 [2024-07-13 20:12:30.874666] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:43.427 [2024-07-13 20:12:30.874675] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:43.427 [2024-07-13 20:12:30.874770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.427 [2024-07-13 20:12:30.874824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.427 [2024-07-13 20:12:30.874892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.427 [2024-07-13 20:12:30.874889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.427 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:43.427 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.428 [2024-07-13 20:12:31.039763] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.428 20:12:31 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.428 Malloc1 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.428 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 [2024-07-13 20:12:31.097127] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 Malloc2 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 Malloc3 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 Malloc4 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
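
The same create/attach/listen triplet repeats below for each of the eleven subsystems (NVMF_SUBSYS=11 in multiconnection.sh), every one sharing the single 10.0.0.2:4420 listener. Condensed, with rpc_cmd again shown as direct rpc.py calls, the loop the harness is walking through is:

  for i in $(seq 1 11); do
      scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i    # 64 MB bdev, 512-byte blocks
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
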
00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 Malloc5 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 Malloc6 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.688 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.946 Malloc7 00:24:43.946 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.946 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 Malloc8 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 Malloc9 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 Malloc10 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 Malloc11 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
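
Once the eleventh listener is added, the test moves to the initiator side, as the connect loop just below shows: each subsystem is attached with nvme connect using the host NQN/ID generated from this machine's UUID, and waitforserial then polls lsblk until a device reporting the matching serial (SPDK1, SPDK2, ...) appears, sleeping 2 s between attempts. A sketch of that connect-and-verify phase, using the hostnqn and hostid values from this run; the unbounded until-loop stands in for the harness's check, which gives up after about 15 retries:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  hostid=5b23e107-7094-e311-b1cb-001e67a97d55
  for i in $(seq 1 11); do
      nvme connect --hostnqn=$hostnqn --hostid=$hostid \
          -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
      # waitforserial: retry until exactly one block device carries serial SPDK$i
      until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDK$i)" -eq 1 ]; do
          sleep 2
      done
  done

The substring match is safe only because the loop is sequential: by the time SPDK10 and SPDK11 exist and would also match grep -c SPDK1, the i=1 check has long since passed.
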
00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:43.947 20:12:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:44.882 20:12:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:44.882 20:12:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:44.882 20:12:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:44.882 20:12:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:44.882 20:12:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:46.780 20:12:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:46.780 20:12:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:46.780 20:12:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:24:46.780 20:12:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:46.780 20:12:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:46.780 20:12:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:46.780 20:12:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.780 20:12:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:47.349 20:12:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:47.349 20:12:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:47.349 20:12:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:47.349 20:12:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:47.349 20:12:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:49.250 20:12:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:49.250 20:12:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:49.250 20:12:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:24:49.508 20:12:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:49.508 20:12:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:49.508 
20:12:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:49.508 20:12:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:49.508 20:12:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:50.073 20:12:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:50.073 20:12:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:50.073 20:12:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:50.073 20:12:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:50.073 20:12:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:51.976 20:12:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:51.976 20:12:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:51.976 20:12:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:24:51.976 20:12:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:51.976 20:12:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:51.976 20:12:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:51.976 20:12:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.976 20:12:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:52.914 20:12:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:52.914 20:12:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:52.914 20:12:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:52.914 20:12:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:52.914 20:12:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:54.813 20:12:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:54.813 20:12:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:54.813 20:12:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:24:54.813 20:12:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:54.813 20:12:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:54.813 20:12:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:54.813 20:12:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.813 20:12:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:55.745 20:12:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:55.745 20:12:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:55.745 20:12:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:55.745 20:12:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:55.745 20:12:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:24:57.649 20:12:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:24:57.649 20:12:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:24:57.649 20:12:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:24:57.649 20:12:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:24:57.649 20:12:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:24:57.649 20:12:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:24:57.649 20:12:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:57.650 20:12:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:58.217 20:12:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:58.217 20:12:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:24:58.217 20:12:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:24:58.217 20:12:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:24:58.217 20:12:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:00.172 20:12:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:00.172 20:12:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:00.172 20:12:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:25:00.430 20:12:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:00.430 20:12:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:00.430 20:12:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:00.430 20:12:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.430 20:12:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:01.365 20:12:48 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:01.365 20:12:48 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:01.365 20:12:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.365 20:12:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:01.365 20:12:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:03.268 20:12:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:03.268 20:12:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:03.268 20:12:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:25:03.268 20:12:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:03.268 20:12:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.268 20:12:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:03.268 20:12:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.268 20:12:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:03.837 20:12:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:03.837 20:12:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:03.837 20:12:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:03.837 20:12:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:03.837 20:12:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:06.369 20:12:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:06.369 20:12:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:06.369 20:12:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:25:06.369 20:12:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:06.369 20:12:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.369 20:12:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:06.369 20:12:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.369 20:12:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:06.937 20:12:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:06.937 20:12:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:06.937 20:12:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:06.937 20:12:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
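The repeated @1194-@1204 entries are the waitforserial helper from common/autotest_common.sh stepping through its poll loop after each nvme connect. Reconstructed from the traced line numbers, it looks roughly like the sketch below; the real helper's structure may differ, and the [[ -n '' ]] at @1196 suggests an optional second argument carrying the expected device count:

waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    [[ -n ${2:-} ]] && nvme_device_counter=$2  # optional expected count (@1196)
    sleep 2                                    # @1201: let udev create the node
    while (( i++ <= 15 )); do                  # @1202: give up after ~15 tries
        # @1203: count block devices whose SERIAL matches, e.g. SPDK9
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0  # @1204
        sleep 2                                # assumed retry delay; never reached in this log
    done
    return 1
}

In every iteration logged here the device shows up on the first check (nvme_devices=1), so each connect settles in roughly two seconds.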
00:25:06.937 20:12:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:08.844 20:12:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:08.844 20:12:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:08.844 20:12:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:25:08.844 20:12:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:08.844 20:12:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:08.844 20:12:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:08.844 20:12:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.844 20:12:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:09.783 20:12:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:09.783 20:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:09.783 20:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:09.783 20:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:09.783 20:12:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:11.693 20:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:11.693 20:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:11.693 20:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:25:11.693 20:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:11.693 20:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:11.693 20:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:11.693 20:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:11.693 20:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:12.626 20:12:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:12.626 20:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:25:12.626 20:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:12.626 20:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:12.626 20:12:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:25:14.528 20:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:14.528 20:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o 
NAME,SERIAL 00:25:14.528 20:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:25:14.528 20:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:14.528 20:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:14.528 20:13:01 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:25:14.528 20:13:01 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:14.528 [global] 00:25:14.528 thread=1 00:25:14.528 invalidate=1 00:25:14.528 rw=read 00:25:14.528 time_based=1 00:25:14.528 runtime=10 00:25:14.528 ioengine=libaio 00:25:14.528 direct=1 00:25:14.528 bs=262144 00:25:14.528 iodepth=64 00:25:14.528 norandommap=1 00:25:14.528 numjobs=1 00:25:14.528 00:25:14.528 [job0] 00:25:14.528 filename=/dev/nvme0n1 00:25:14.528 [job1] 00:25:14.528 filename=/dev/nvme10n1 00:25:14.528 [job2] 00:25:14.528 filename=/dev/nvme1n1 00:25:14.528 [job3] 00:25:14.528 filename=/dev/nvme2n1 00:25:14.528 [job4] 00:25:14.528 filename=/dev/nvme3n1 00:25:14.528 [job5] 00:25:14.528 filename=/dev/nvme4n1 00:25:14.528 [job6] 00:25:14.528 filename=/dev/nvme5n1 00:25:14.528 [job7] 00:25:14.528 filename=/dev/nvme6n1 00:25:14.528 [job8] 00:25:14.528 filename=/dev/nvme7n1 00:25:14.528 [job9] 00:25:14.528 filename=/dev/nvme8n1 00:25:14.528 [job10] 00:25:14.528 filename=/dev/nvme9n1 00:25:14.528 Could not set queue depth (nvme0n1) 00:25:14.528 Could not set queue depth (nvme10n1) 00:25:14.528 Could not set queue depth (nvme1n1) 00:25:14.528 Could not set queue depth (nvme2n1) 00:25:14.528 Could not set queue depth (nvme3n1) 00:25:14.528 Could not set queue depth (nvme4n1) 00:25:14.528 Could not set queue depth (nvme5n1) 00:25:14.528 Could not set queue depth (nvme6n1) 00:25:14.528 Could not set queue depth (nvme7n1) 00:25:14.528 Could not set queue depth (nvme8n1) 00:25:14.528 Could not set queue depth (nvme9n1) 00:25:14.786 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.786 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.786 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.786 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.786 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.786 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.786 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.786 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.786 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.786 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.786 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:14.786 fio-3.35 00:25:14.786 Starting 11 threads 00:25:27.019 00:25:27.019 job0: 
(groupid=0, jobs=1): err= 0: pid=3255915: Sat Jul 13 20:13:12 2024 00:25:27.019 read: IOPS=643, BW=161MiB/s (169MB/s)(1625MiB/10104msec) 00:25:27.019 slat (usec): min=9, max=77368, avg=800.96, stdev=3828.40 00:25:27.019 clat (usec): min=1207, max=243122, avg=98631.90, stdev=50521.74 00:25:27.019 lat (usec): min=1229, max=243184, avg=99432.86, stdev=50986.13 00:25:27.019 clat percentiles (msec): 00:25:27.019 | 1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 34], 20.00th=[ 54], 00:25:27.019 | 30.00th=[ 68], 40.00th=[ 80], 50.00th=[ 93], 60.00th=[ 107], 00:25:27.019 | 70.00th=[ 129], 80.00th=[ 146], 90.00th=[ 171], 95.00th=[ 190], 00:25:27.019 | 99.00th=[ 209], 99.50th=[ 215], 99.90th=[ 228], 99.95th=[ 241], 00:25:27.019 | 99.99th=[ 243] 00:25:27.019 bw ( KiB/s): min=92160, max=285696, per=8.57%, avg=164722.50, stdev=54597.05, samples=20 00:25:27.019 iops : min= 360, max= 1116, avg=643.40, stdev=213.30, samples=20 00:25:27.019 lat (msec) : 2=0.06%, 4=0.14%, 10=1.32%, 20=2.97%, 50=13.54% 00:25:27.019 lat (msec) : 100=37.73%, 250=44.23% 00:25:27.019 cpu : usr=0.24%, sys=2.03%, ctx=2034, majf=0, minf=4097 00:25:27.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:27.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.019 issued rwts: total=6498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.019 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.019 job1: (groupid=0, jobs=1): err= 0: pid=3255916: Sat Jul 13 20:13:12 2024 00:25:27.019 read: IOPS=535, BW=134MiB/s (140MB/s)(1353MiB/10108msec) 00:25:27.019 slat (usec): min=14, max=108929, avg=1646.92, stdev=5634.94 00:25:27.019 clat (msec): min=3, max=279, avg=117.77, stdev=45.64 00:25:27.019 lat (msec): min=3, max=279, avg=119.42, stdev=46.46 00:25:27.019 clat percentiles (msec): 00:25:27.019 | 1.00th=[ 15], 5.00th=[ 37], 10.00th=[ 55], 20.00th=[ 87], 00:25:27.019 | 30.00th=[ 93], 40.00th=[ 101], 50.00th=[ 112], 60.00th=[ 132], 00:25:27.019 | 70.00th=[ 146], 80.00th=[ 159], 90.00th=[ 178], 95.00th=[ 194], 00:25:27.019 | 99.00th=[ 211], 99.50th=[ 228], 99.90th=[ 249], 99.95th=[ 255], 00:25:27.019 | 99.99th=[ 279] 00:25:27.019 bw ( KiB/s): min=71168, max=247296, per=7.13%, avg=136953.95, stdev=42464.92, samples=20 00:25:27.019 iops : min= 278, max= 966, avg=534.90, stdev=165.90, samples=20 00:25:27.019 lat (msec) : 4=0.02%, 10=0.35%, 20=1.33%, 50=7.02%, 100=30.76% 00:25:27.019 lat (msec) : 250=60.43%, 500=0.09% 00:25:27.019 cpu : usr=0.36%, sys=2.02%, ctx=1302, majf=0, minf=4097 00:25:27.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:27.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.019 issued rwts: total=5413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.019 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.019 job2: (groupid=0, jobs=1): err= 0: pid=3255917: Sat Jul 13 20:13:12 2024 00:25:27.019 read: IOPS=669, BW=167MiB/s (175MB/s)(1691MiB/10105msec) 00:25:27.019 slat (usec): min=9, max=101937, avg=1289.79, stdev=5125.82 00:25:27.019 clat (usec): min=1613, max=284707, avg=94225.79, stdev=47300.40 00:25:27.019 lat (usec): min=1651, max=294240, avg=95515.57, stdev=48127.88 00:25:27.019 clat percentiles (msec): 00:25:27.019 | 1.00th=[ 9], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 55], 00:25:27.019 | 30.00th=[ 62], 40.00th=[ 
71], 50.00th=[ 81], 60.00th=[ 92], 00:25:27.019 | 70.00th=[ 114], 80.00th=[ 148], 90.00th=[ 167], 95.00th=[ 180], 00:25:27.019 | 99.00th=[ 207], 99.50th=[ 211], 99.90th=[ 251], 99.95th=[ 271], 00:25:27.019 | 99.99th=[ 284] 00:25:27.019 bw ( KiB/s): min=77824, max=303104, per=8.93%, avg=171534.85, stdev=69662.32, samples=20 00:25:27.019 iops : min= 304, max= 1184, avg=670.05, stdev=272.13, samples=20 00:25:27.019 lat (msec) : 2=0.04%, 4=0.09%, 10=1.09%, 20=1.23%, 50=10.91% 00:25:27.019 lat (msec) : 100=51.18%, 250=35.37%, 500=0.09% 00:25:27.019 cpu : usr=0.49%, sys=2.15%, ctx=1521, majf=0, minf=4097 00:25:27.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:27.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.019 issued rwts: total=6765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.019 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.019 job3: (groupid=0, jobs=1): err= 0: pid=3255918: Sat Jul 13 20:13:12 2024 00:25:27.019 read: IOPS=890, BW=223MiB/s (233MB/s)(2232MiB/10029msec) 00:25:27.019 slat (usec): min=11, max=110284, avg=997.27, stdev=3408.99 00:25:27.019 clat (msec): min=3, max=290, avg=70.85, stdev=34.54 00:25:27.019 lat (msec): min=3, max=290, avg=71.85, stdev=35.08 00:25:27.019 clat percentiles (msec): 00:25:27.019 | 1.00th=[ 13], 5.00th=[ 33], 10.00th=[ 45], 20.00th=[ 49], 00:25:27.019 | 30.00th=[ 52], 40.00th=[ 57], 50.00th=[ 63], 60.00th=[ 69], 00:25:27.019 | 70.00th=[ 75], 80.00th=[ 86], 90.00th=[ 123], 95.00th=[ 144], 00:25:27.019 | 99.00th=[ 205], 99.50th=[ 215], 99.90th=[ 230], 99.95th=[ 234], 00:25:27.019 | 99.99th=[ 292] 00:25:27.019 bw ( KiB/s): min=88064, max=320512, per=11.81%, avg=226896.00, stdev=68933.11, samples=20 00:25:27.019 iops : min= 344, max= 1252, avg=886.30, stdev=269.27, samples=20 00:25:27.019 lat (msec) : 4=0.11%, 10=0.49%, 20=2.33%, 50=21.99%, 100=60.75% 00:25:27.019 lat (msec) : 250=14.28%, 500=0.04% 00:25:27.019 cpu : usr=0.62%, sys=2.98%, ctx=1960, majf=0, minf=4097 00:25:27.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:27.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.019 issued rwts: total=8927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.019 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.019 job4: (groupid=0, jobs=1): err= 0: pid=3255919: Sat Jul 13 20:13:12 2024 00:25:27.019 read: IOPS=746, BW=187MiB/s (196MB/s)(1881MiB/10078msec) 00:25:27.019 slat (usec): min=14, max=46524, avg=1319.67, stdev=3377.24 00:25:27.019 clat (msec): min=8, max=169, avg=84.34, stdev=26.34 00:25:27.019 lat (msec): min=8, max=179, avg=85.66, stdev=26.75 00:25:27.019 clat percentiles (msec): 00:25:27.019 | 1.00th=[ 33], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 61], 00:25:27.019 | 30.00th=[ 69], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 92], 00:25:27.019 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 127], 00:25:27.019 | 99.00th=[ 146], 99.50th=[ 153], 99.90th=[ 165], 99.95th=[ 169], 00:25:27.019 | 99.99th=[ 169] 00:25:27.020 bw ( KiB/s): min=123904, max=342016, per=9.94%, avg=190986.20, stdev=56690.89, samples=20 00:25:27.020 iops : min= 484, max= 1336, avg=746.00, stdev=221.48, samples=20 00:25:27.020 lat (msec) : 10=0.13%, 20=0.04%, 50=11.16%, 100=59.16%, 250=29.51% 00:25:27.020 cpu : usr=0.40%, sys=2.64%, 
ctx=1609, majf=0, minf=4097 00:25:27.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:27.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.020 issued rwts: total=7524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.020 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.020 job5: (groupid=0, jobs=1): err= 0: pid=3255920: Sat Jul 13 20:13:12 2024 00:25:27.020 read: IOPS=578, BW=145MiB/s (152MB/s)(1461MiB/10103msec) 00:25:27.020 slat (usec): min=9, max=103519, avg=1459.64, stdev=5406.99 00:25:27.020 clat (usec): min=1025, max=299723, avg=109122.46, stdev=42340.13 00:25:27.020 lat (usec): min=1048, max=299807, avg=110582.10, stdev=43129.62 00:25:27.020 clat percentiles (msec): 00:25:27.020 | 1.00th=[ 11], 5.00th=[ 47], 10.00th=[ 62], 20.00th=[ 78], 00:25:27.020 | 30.00th=[ 88], 40.00th=[ 94], 50.00th=[ 104], 60.00th=[ 114], 00:25:27.020 | 70.00th=[ 125], 80.00th=[ 148], 90.00th=[ 169], 95.00th=[ 186], 00:25:27.020 | 99.00th=[ 211], 99.50th=[ 220], 99.90th=[ 247], 99.95th=[ 266], 00:25:27.020 | 99.99th=[ 300] 00:25:27.020 bw ( KiB/s): min=87040, max=249856, per=7.70%, avg=147928.65, stdev=48576.09, samples=20 00:25:27.020 iops : min= 340, max= 976, avg=577.80, stdev=189.76, samples=20 00:25:27.020 lat (msec) : 2=0.02%, 4=0.07%, 10=0.89%, 20=1.03%, 50=4.18% 00:25:27.020 lat (msec) : 100=40.48%, 250=53.29%, 500=0.05% 00:25:27.020 cpu : usr=0.31%, sys=1.96%, ctx=1427, majf=0, minf=4097 00:25:27.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:27.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.020 issued rwts: total=5842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.020 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.020 job6: (groupid=0, jobs=1): err= 0: pid=3255921: Sat Jul 13 20:13:12 2024 00:25:27.020 read: IOPS=517, BW=129MiB/s (136MB/s)(1308MiB/10100msec) 00:25:27.020 slat (usec): min=12, max=81406, avg=1749.33, stdev=5332.92 00:25:27.020 clat (msec): min=3, max=263, avg=121.71, stdev=40.99 00:25:27.020 lat (msec): min=3, max=263, avg=123.46, stdev=41.84 00:25:27.020 clat percentiles (msec): 00:25:27.020 | 1.00th=[ 21], 5.00th=[ 57], 10.00th=[ 84], 20.00th=[ 90], 00:25:27.020 | 30.00th=[ 96], 40.00th=[ 104], 50.00th=[ 115], 60.00th=[ 131], 00:25:27.020 | 70.00th=[ 146], 80.00th=[ 159], 90.00th=[ 178], 95.00th=[ 194], 00:25:27.020 | 99.00th=[ 211], 99.50th=[ 222], 99.90th=[ 234], 99.95th=[ 247], 00:25:27.020 | 99.99th=[ 264] 00:25:27.020 bw ( KiB/s): min=78336, max=209500, per=6.89%, avg=132279.80, stdev=37855.71, samples=20 00:25:27.020 iops : min= 306, max= 818, avg=516.70, stdev=147.84, samples=20 00:25:27.020 lat (msec) : 4=0.04%, 10=0.32%, 20=0.52%, 50=3.21%, 100=31.50% 00:25:27.020 lat (msec) : 250=64.39%, 500=0.02% 00:25:27.020 cpu : usr=0.36%, sys=2.01%, ctx=1298, majf=0, minf=4097 00:25:27.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:27.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.020 issued rwts: total=5231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.020 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.020 job7: (groupid=0, jobs=1): err= 0: pid=3255928: Sat Jul 
13 20:13:12 2024 00:25:27.020 read: IOPS=796, BW=199MiB/s (209MB/s)(2010MiB/10095msec) 00:25:27.020 slat (usec): min=9, max=106307, avg=642.62, stdev=3323.34 00:25:27.020 clat (usec): min=1347, max=494767, avg=79668.68, stdev=51143.41 00:25:27.020 lat (usec): min=1365, max=494780, avg=80311.30, stdev=51582.56 00:25:27.020 clat percentiles (msec): 00:25:27.020 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 27], 20.00th=[ 40], 00:25:27.020 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 81], 00:25:27.020 | 70.00th=[ 94], 80.00th=[ 114], 90.00th=[ 146], 95.00th=[ 167], 00:25:27.020 | 99.00th=[ 220], 99.50th=[ 355], 99.90th=[ 426], 99.95th=[ 493], 00:25:27.020 | 99.99th=[ 493] 00:25:27.020 bw ( KiB/s): min=70144, max=299008, per=10.63%, avg=204137.10, stdev=59103.99, samples=20 00:25:27.020 iops : min= 274, max= 1168, avg=797.40, stdev=230.87, samples=20 00:25:27.020 lat (msec) : 2=0.19%, 4=0.25%, 10=1.78%, 20=4.27%, 50=22.07% 00:25:27.020 lat (msec) : 100=45.55%, 250=25.28%, 500=0.62% 00:25:27.020 cpu : usr=0.40%, sys=2.42%, ctx=2293, majf=0, minf=4097 00:25:27.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:27.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.020 issued rwts: total=8038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.020 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.020 job8: (groupid=0, jobs=1): err= 0: pid=3255929: Sat Jul 13 20:13:12 2024 00:25:27.020 read: IOPS=883, BW=221MiB/s (232MB/s)(2226MiB/10075msec) 00:25:27.020 slat (usec): min=9, max=144121, avg=995.65, stdev=3493.51 00:25:27.020 clat (msec): min=2, max=325, avg=71.36, stdev=39.05 00:25:27.020 lat (msec): min=2, max=325, avg=72.35, stdev=39.46 00:25:27.020 clat percentiles (msec): 00:25:27.020 | 1.00th=[ 11], 5.00th=[ 32], 10.00th=[ 41], 20.00th=[ 46], 00:25:27.020 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 67], 00:25:27.020 | 70.00th=[ 79], 80.00th=[ 93], 90.00th=[ 120], 95.00th=[ 148], 00:25:27.020 | 99.00th=[ 215], 99.50th=[ 247], 99.90th=[ 284], 99.95th=[ 284], 00:25:27.020 | 99.99th=[ 326] 00:25:27.020 bw ( KiB/s): min=111616, max=330752, per=11.78%, avg=226314.10, stdev=66167.87, samples=20 00:25:27.020 iops : min= 436, max= 1292, avg=884.00, stdev=258.51, samples=20 00:25:27.020 lat (msec) : 4=0.02%, 10=0.67%, 20=2.65%, 50=28.76%, 100=51.37% 00:25:27.020 lat (msec) : 250=16.05%, 500=0.47% 00:25:27.020 cpu : usr=0.47%, sys=2.93%, ctx=1904, majf=0, minf=4097 00:25:27.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:27.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.020 issued rwts: total=8904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.020 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.020 job9: (groupid=0, jobs=1): err= 0: pid=3255930: Sat Jul 13 20:13:12 2024 00:25:27.020 read: IOPS=609, BW=152MiB/s (160MB/s)(1536MiB/10076msec) 00:25:27.020 slat (usec): min=9, max=95601, avg=696.38, stdev=3723.78 00:25:27.020 clat (usec): min=1620, max=326531, avg=104163.54, stdev=46333.19 00:25:27.020 lat (usec): min=1686, max=326546, avg=104859.93, stdev=46695.24 00:25:27.020 clat percentiles (msec): 00:25:27.020 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 44], 20.00th=[ 71], 00:25:27.020 | 30.00th=[ 82], 40.00th=[ 93], 50.00th=[ 103], 60.00th=[ 112], 
00:25:27.020 | 70.00th=[ 123], 80.00th=[ 142], 90.00th=[ 169], 95.00th=[ 190], 00:25:27.020 | 99.00th=[ 211], 99.50th=[ 224], 99.90th=[ 317], 99.95th=[ 321], 00:25:27.020 | 99.99th=[ 326] 00:25:27.020 bw ( KiB/s): min=106496, max=235520, per=8.10%, avg=155679.40, stdev=34135.29, samples=20 00:25:27.020 iops : min= 416, max= 920, avg=608.10, stdev=133.31, samples=20 00:25:27.020 lat (msec) : 2=0.03%, 4=0.13%, 10=1.51%, 20=2.57%, 50=7.40% 00:25:27.020 lat (msec) : 100=36.73%, 250=51.49%, 500=0.13% 00:25:27.020 cpu : usr=0.37%, sys=1.74%, ctx=2019, majf=0, minf=3722 00:25:27.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:27.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.020 issued rwts: total=6145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.020 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.020 job10: (groupid=0, jobs=1): err= 0: pid=3255931: Sat Jul 13 20:13:12 2024 00:25:27.020 read: IOPS=651, BW=163MiB/s (171MB/s)(1641MiB/10077msec) 00:25:27.020 slat (usec): min=10, max=81214, avg=961.32, stdev=3463.82 00:25:27.020 clat (usec): min=1659, max=387939, avg=97230.97, stdev=40897.62 00:25:27.020 lat (usec): min=1699, max=387954, avg=98192.29, stdev=41128.66 00:25:27.020 clat percentiles (msec): 00:25:27.020 | 1.00th=[ 12], 5.00th=[ 32], 10.00th=[ 56], 20.00th=[ 73], 00:25:27.020 | 30.00th=[ 82], 40.00th=[ 90], 50.00th=[ 96], 60.00th=[ 102], 00:25:27.020 | 70.00th=[ 109], 80.00th=[ 117], 90.00th=[ 134], 95.00th=[ 165], 00:25:27.020 | 99.00th=[ 209], 99.50th=[ 351], 99.90th=[ 376], 99.95th=[ 384], 00:25:27.020 | 99.99th=[ 388] 00:25:27.020 bw ( KiB/s): min=84480, max=241664, per=8.66%, avg=166400.25, stdev=32424.18, samples=20 00:25:27.020 iops : min= 330, max= 944, avg=649.95, stdev=126.67, samples=20 00:25:27.020 lat (msec) : 2=0.03%, 4=0.12%, 10=0.47%, 20=2.44%, 50=5.36% 00:25:27.020 lat (msec) : 100=48.82%, 250=42.13%, 500=0.62% 00:25:27.020 cpu : usr=0.30%, sys=2.06%, ctx=1833, majf=0, minf=4097 00:25:27.020 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:27.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:27.020 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:27.020 issued rwts: total=6563,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:27.020 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:27.020 00:25:27.020 Run status group 0 (all jobs): 00:25:27.020 READ: bw=1876MiB/s (1967MB/s), 129MiB/s-223MiB/s (136MB/s-233MB/s), io=18.5GiB (19.9GB), run=10029-10108msec 00:25:27.020 00:25:27.020 Disk stats (read/write): 00:25:27.020 nvme0n1: ios=12816/0, merge=0/0, ticks=1242038/0, in_queue=1242038, util=97.21% 00:25:27.020 nvme10n1: ios=10570/0, merge=0/0, ticks=1228488/0, in_queue=1228488, util=97.40% 00:25:27.020 nvme1n1: ios=13351/0, merge=0/0, ticks=1228825/0, in_queue=1228825, util=97.69% 00:25:27.020 nvme2n1: ios=17604/0, merge=0/0, ticks=1233747/0, in_queue=1233747, util=97.83% 00:25:27.020 nvme3n1: ios=14824/0, merge=0/0, ticks=1228570/0, in_queue=1228570, util=97.89% 00:25:27.020 nvme4n1: ios=11492/0, merge=0/0, ticks=1228955/0, in_queue=1228955, util=98.21% 00:25:27.020 nvme5n1: ios=10238/0, merge=0/0, ticks=1230523/0, in_queue=1230523, util=98.37% 00:25:27.020 nvme6n1: ios=15860/0, merge=0/0, ticks=1240713/0, in_queue=1240713, util=98.49% 00:25:27.020 nvme7n1: ios=17596/0, merge=0/0, ticks=1235849/0, 
in_queue=1235849, util=98.92% 00:25:27.020 nvme8n1: ios=12092/0, merge=0/0, ticks=1248265/0, in_queue=1248265, util=99.10% 00:25:27.020 nvme9n1: ios=12905/0, merge=0/0, ticks=1239224/0, in_queue=1239224, util=99.21% 00:25:27.020 20:13:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:27.020 [global] 00:25:27.020 thread=1 00:25:27.020 invalidate=1 00:25:27.020 rw=randwrite 00:25:27.020 time_based=1 00:25:27.020 runtime=10 00:25:27.020 ioengine=libaio 00:25:27.020 direct=1 00:25:27.020 bs=262144 00:25:27.020 iodepth=64 00:25:27.020 norandommap=1 00:25:27.020 numjobs=1 00:25:27.020 00:25:27.020 [job0] 00:25:27.020 filename=/dev/nvme0n1 00:25:27.020 [job1] 00:25:27.020 filename=/dev/nvme10n1 00:25:27.020 [job2] 00:25:27.020 filename=/dev/nvme1n1 00:25:27.020 [job3] 00:25:27.021 filename=/dev/nvme2n1 00:25:27.021 [job4] 00:25:27.021 filename=/dev/nvme3n1 00:25:27.021 [job5] 00:25:27.021 filename=/dev/nvme4n1 00:25:27.021 [job6] 00:25:27.021 filename=/dev/nvme5n1 00:25:27.021 [job7] 00:25:27.021 filename=/dev/nvme6n1 00:25:27.021 [job8] 00:25:27.021 filename=/dev/nvme7n1 00:25:27.021 [job9] 00:25:27.021 filename=/dev/nvme8n1 00:25:27.021 [job10] 00:25:27.021 filename=/dev/nvme9n1 00:25:27.021 Could not set queue depth (nvme0n1) 00:25:27.021 Could not set queue depth (nvme10n1) 00:25:27.021 Could not set queue depth (nvme1n1) 00:25:27.021 Could not set queue depth (nvme2n1) 00:25:27.021 Could not set queue depth (nvme3n1) 00:25:27.021 Could not set queue depth (nvme4n1) 00:25:27.021 Could not set queue depth (nvme5n1) 00:25:27.021 Could not set queue depth (nvme6n1) 00:25:27.021 Could not set queue depth (nvme7n1) 00:25:27.021 Could not set queue depth (nvme8n1) 00:25:27.021 Could not set queue depth (nvme9n1) 00:25:27.021 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:27.021 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:27.021 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:27.021 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:27.021 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:27.021 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:27.021 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:27.021 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:27.021 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:27.021 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:27.021 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:27.021 fio-3.35 00:25:27.021 Starting 11 threads 00:25:36.996 00:25:36.996 job0: (groupid=0, jobs=1): err= 0: pid=3257089: Sat Jul 13 20:13:23 2024 00:25:36.996 write: IOPS=441, BW=110MiB/s (116MB/s)(1119MiB/10124msec); 0 zone resets 00:25:36.996 slat (usec): min=16, 
max=262908, avg=1351.40, stdev=6238.23 00:25:36.996 clat (msec): min=2, max=746, avg=143.40, stdev=102.27 00:25:36.996 lat (msec): min=2, max=757, avg=144.76, stdev=103.44 00:25:36.996 clat percentiles (msec): 00:25:36.996 | 1.00th=[ 10], 5.00th=[ 23], 10.00th=[ 36], 20.00th=[ 64], 00:25:36.996 | 30.00th=[ 97], 40.00th=[ 123], 50.00th=[ 136], 60.00th=[ 153], 00:25:36.996 | 70.00th=[ 167], 80.00th=[ 190], 90.00th=[ 226], 95.00th=[ 284], 00:25:36.996 | 99.00th=[ 684], 99.50th=[ 735], 99.90th=[ 743], 99.95th=[ 743], 00:25:36.996 | 99.99th=[ 751] 00:25:36.996 bw ( KiB/s): min=40960, max=247808, per=8.77%, avg=112908.90, stdev=48929.74, samples=20 00:25:36.996 iops : min= 160, max= 968, avg=441.05, stdev=191.13, samples=20 00:25:36.996 lat (msec) : 4=0.09%, 10=1.10%, 20=3.15%, 50=11.09%, 100=15.53% 00:25:36.996 lat (msec) : 250=61.94%, 500=5.79%, 750=1.32% 00:25:36.996 cpu : usr=1.17%, sys=1.45%, ctx=3055, majf=0, minf=1 00:25:36.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:36.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:36.996 issued rwts: total=0,4474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.996 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:36.996 job1: (groupid=0, jobs=1): err= 0: pid=3257110: Sat Jul 13 20:13:23 2024 00:25:36.996 write: IOPS=533, BW=133MiB/s (140MB/s)(1348MiB/10115msec); 0 zone resets 00:25:36.996 slat (usec): min=20, max=51683, avg=1398.17, stdev=3565.25 00:25:36.996 clat (msec): min=3, max=344, avg=118.60, stdev=59.93 00:25:36.996 lat (msec): min=5, max=345, avg=120.00, stdev=60.68 00:25:36.996 clat percentiles (msec): 00:25:36.996 | 1.00th=[ 17], 5.00th=[ 38], 10.00th=[ 50], 20.00th=[ 71], 00:25:36.996 | 30.00th=[ 84], 40.00th=[ 101], 50.00th=[ 108], 60.00th=[ 117], 00:25:36.996 | 70.00th=[ 138], 80.00th=[ 167], 90.00th=[ 207], 95.00th=[ 232], 00:25:36.996 | 99.00th=[ 296], 99.50th=[ 326], 99.90th=[ 338], 99.95th=[ 342], 00:25:36.996 | 99.99th=[ 347] 00:25:36.996 bw ( KiB/s): min=80384, max=207360, per=10.59%, avg=136425.85, stdev=40701.58, samples=20 00:25:36.996 iops : min= 314, max= 810, avg=532.85, stdev=159.04, samples=20 00:25:36.996 lat (msec) : 4=0.02%, 10=0.28%, 20=1.24%, 50=8.72%, 100=29.66% 00:25:36.996 lat (msec) : 250=56.99%, 500=3.10% 00:25:36.996 cpu : usr=1.63%, sys=1.83%, ctx=2623, majf=0, minf=1 00:25:36.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:36.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:36.996 issued rwts: total=0,5392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.996 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:36.996 job2: (groupid=0, jobs=1): err= 0: pid=3257111: Sat Jul 13 20:13:23 2024 00:25:36.996 write: IOPS=548, BW=137MiB/s (144MB/s)(1391MiB/10138msec); 0 zone resets 00:25:36.996 slat (usec): min=21, max=181839, avg=1337.54, stdev=4919.11 00:25:36.996 clat (usec): min=1727, max=462642, avg=115177.18, stdev=81160.36 00:25:36.996 lat (usec): min=1821, max=462712, avg=116514.71, stdev=82078.92 00:25:36.996 clat percentiles (msec): 00:25:36.996 | 1.00th=[ 11], 5.00th=[ 30], 10.00th=[ 41], 20.00th=[ 53], 00:25:36.996 | 30.00th=[ 71], 40.00th=[ 79], 50.00th=[ 89], 60.00th=[ 100], 00:25:36.996 | 70.00th=[ 136], 80.00th=[ 176], 90.00th=[ 226], 95.00th=[ 288], 00:25:36.996 | 99.00th=[ 422], 
99.50th=[ 435], 99.90th=[ 460], 99.95th=[ 464], 00:25:36.996 | 99.99th=[ 464] 00:25:36.996 bw ( KiB/s): min=53760, max=243712, per=10.94%, avg=140827.80, stdev=62219.46, samples=20 00:25:36.996 iops : min= 210, max= 952, avg=550.10, stdev=243.03, samples=20 00:25:36.996 lat (msec) : 2=0.04%, 4=0.20%, 10=0.68%, 20=1.78%, 50=16.21% 00:25:36.996 lat (msec) : 100=41.35%, 250=33.26%, 500=6.49% 00:25:36.996 cpu : usr=1.43%, sys=1.58%, ctx=2696, majf=0, minf=1 00:25:36.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:36.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:36.996 issued rwts: total=0,5565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.996 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:36.996 job3: (groupid=0, jobs=1): err= 0: pid=3257112: Sat Jul 13 20:13:23 2024 00:25:36.996 write: IOPS=451, BW=113MiB/s (118MB/s)(1149MiB/10171msec); 0 zone resets 00:25:36.996 slat (usec): min=13, max=60509, avg=1153.75, stdev=3972.82 00:25:36.996 clat (msec): min=2, max=457, avg=140.38, stdev=79.71 00:25:36.996 lat (msec): min=2, max=457, avg=141.53, stdev=80.42 00:25:36.996 clat percentiles (msec): 00:25:36.996 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 36], 20.00th=[ 64], 00:25:36.996 | 30.00th=[ 87], 40.00th=[ 111], 50.00th=[ 140], 60.00th=[ 163], 00:25:36.996 | 70.00th=[ 190], 80.00th=[ 215], 90.00th=[ 241], 95.00th=[ 259], 00:25:36.996 | 99.00th=[ 351], 99.50th=[ 384], 99.90th=[ 418], 99.95th=[ 422], 00:25:36.996 | 99.99th=[ 460] 00:25:36.996 bw ( KiB/s): min=68096, max=211968, per=9.01%, avg=116070.40, stdev=33647.19, samples=20 00:25:36.996 iops : min= 266, max= 828, avg=453.40, stdev=131.43, samples=20 00:25:36.996 lat (msec) : 4=0.09%, 10=1.31%, 20=4.50%, 50=9.68%, 100=19.84% 00:25:36.996 lat (msec) : 250=57.62%, 500=6.96% 00:25:36.996 cpu : usr=1.28%, sys=1.60%, ctx=3234, majf=0, minf=1 00:25:36.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:25:36.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:36.996 issued rwts: total=0,4597,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.996 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:36.996 job4: (groupid=0, jobs=1): err= 0: pid=3257113: Sat Jul 13 20:13:23 2024 00:25:36.996 write: IOPS=502, BW=126MiB/s (132MB/s)(1272MiB/10119msec); 0 zone resets 00:25:36.996 slat (usec): min=15, max=63407, avg=1269.44, stdev=3633.20 00:25:36.996 clat (usec): min=1883, max=283076, avg=126012.29, stdev=59832.53 00:25:36.996 lat (usec): min=1943, max=286606, avg=127281.73, stdev=60542.88 00:25:36.996 clat percentiles (msec): 00:25:36.996 | 1.00th=[ 11], 5.00th=[ 31], 10.00th=[ 47], 20.00th=[ 66], 00:25:36.996 | 30.00th=[ 88], 40.00th=[ 114], 50.00th=[ 130], 60.00th=[ 142], 00:25:36.996 | 70.00th=[ 159], 80.00th=[ 176], 90.00th=[ 209], 95.00th=[ 228], 00:25:36.996 | 99.00th=[ 259], 99.50th=[ 271], 99.90th=[ 279], 99.95th=[ 284], 00:25:36.996 | 99.99th=[ 284] 00:25:36.996 bw ( KiB/s): min=83800, max=197632, per=9.98%, avg=128554.80, stdev=34627.07, samples=20 00:25:36.996 iops : min= 327, max= 772, avg=502.15, stdev=135.29, samples=20 00:25:36.996 lat (msec) : 2=0.02%, 4=0.08%, 10=0.85%, 20=1.42%, 50=8.95% 00:25:36.996 lat (msec) : 100=23.81%, 250=62.96%, 500=1.93% 00:25:36.996 cpu : usr=1.41%, sys=1.80%, ctx=3006, majf=0, minf=1 00:25:36.996 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:36.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:36.996 issued rwts: total=0,5086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.996 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:36.996 job5: (groupid=0, jobs=1): err= 0: pid=3257114: Sat Jul 13 20:13:23 2024 00:25:36.996 write: IOPS=497, BW=124MiB/s (130MB/s)(1266MiB/10177msec); 0 zone resets 00:25:36.996 slat (usec): min=18, max=50271, avg=1226.28, stdev=3883.78 00:25:36.996 clat (msec): min=3, max=462, avg=127.38, stdev=79.77 00:25:36.996 lat (msec): min=3, max=462, avg=128.60, stdev=80.73 00:25:36.996 clat percentiles (msec): 00:25:36.996 | 1.00th=[ 13], 5.00th=[ 28], 10.00th=[ 45], 20.00th=[ 67], 00:25:36.996 | 30.00th=[ 82], 40.00th=[ 102], 50.00th=[ 110], 60.00th=[ 126], 00:25:36.996 | 70.00th=[ 142], 80.00th=[ 171], 90.00th=[ 228], 95.00th=[ 313], 00:25:36.996 | 99.00th=[ 388], 99.50th=[ 414], 99.90th=[ 460], 99.95th=[ 464], 00:25:36.996 | 99.99th=[ 464] 00:25:36.996 bw ( KiB/s): min=40960, max=205312, per=9.94%, avg=127941.15, stdev=45615.17, samples=20 00:25:36.996 iops : min= 160, max= 802, avg=499.70, stdev=178.15, samples=20 00:25:36.996 lat (msec) : 4=0.04%, 10=0.55%, 20=2.25%, 50=8.89%, 100=27.58% 00:25:36.996 lat (msec) : 250=53.18%, 500=7.51% 00:25:36.996 cpu : usr=1.40%, sys=1.63%, ctx=3155, majf=0, minf=1 00:25:36.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:36.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:36.996 issued rwts: total=0,5062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.996 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:36.996 job6: (groupid=0, jobs=1): err= 0: pid=3257115: Sat Jul 13 20:13:23 2024 00:25:36.996 write: IOPS=422, BW=106MiB/s (111MB/s)(1072MiB/10155msec); 0 zone resets 00:25:36.996 slat (usec): min=16, max=84275, avg=1647.17, stdev=5048.09 00:25:36.996 clat (msec): min=2, max=463, avg=149.50, stdev=92.43 00:25:36.996 lat (msec): min=2, max=463, avg=151.15, stdev=93.66 00:25:36.996 clat percentiles (msec): 00:25:36.996 | 1.00th=[ 10], 5.00th=[ 22], 10.00th=[ 42], 20.00th=[ 70], 00:25:36.996 | 30.00th=[ 78], 40.00th=[ 91], 50.00th=[ 146], 60.00th=[ 180], 00:25:36.996 | 70.00th=[ 205], 80.00th=[ 230], 90.00th=[ 271], 95.00th=[ 305], 00:25:36.996 | 99.00th=[ 405], 99.50th=[ 439], 99.90th=[ 460], 99.95th=[ 464], 00:25:36.996 | 99.99th=[ 464] 00:25:36.996 bw ( KiB/s): min=43008, max=234496, per=8.40%, avg=108139.50, stdev=52432.92, samples=20 00:25:36.996 iops : min= 168, max= 916, avg=422.40, stdev=204.83, samples=20 00:25:36.996 lat (msec) : 4=0.26%, 10=0.96%, 20=3.36%, 50=7.79%, 100=29.65% 00:25:36.996 lat (msec) : 250=43.67%, 500=14.32% 00:25:36.996 cpu : usr=1.19%, sys=1.35%, ctx=2532, majf=0, minf=1 00:25:36.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:25:36.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:36.997 issued rwts: total=0,4287,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:36.997 job7: (groupid=0, jobs=1): err= 0: pid=3257116: Sat Jul 13 20:13:23 2024 00:25:36.997 write: IOPS=397, 
BW=99.4MiB/s (104MB/s)(1009MiB/10152msec); 0 zone resets 00:25:36.997 slat (usec): min=23, max=134142, avg=1400.45, stdev=4639.19 00:25:36.997 clat (msec): min=2, max=457, avg=159.44, stdev=73.13 00:25:36.997 lat (msec): min=2, max=528, avg=160.84, stdev=74.00 00:25:36.997 clat percentiles (msec): 00:25:36.997 | 1.00th=[ 18], 5.00th=[ 40], 10.00th=[ 60], 20.00th=[ 94], 00:25:36.997 | 30.00th=[ 127], 40.00th=[ 153], 50.00th=[ 165], 60.00th=[ 178], 00:25:36.997 | 70.00th=[ 192], 80.00th=[ 211], 90.00th=[ 228], 95.00th=[ 251], 00:25:36.997 | 99.00th=[ 426], 99.50th=[ 439], 99.90th=[ 447], 99.95th=[ 451], 00:25:36.997 | 99.99th=[ 460] 00:25:36.997 bw ( KiB/s): min=67584, max=137728, per=7.90%, avg=101700.95, stdev=18067.85, samples=20 00:25:36.997 iops : min= 264, max= 538, avg=397.25, stdev=70.60, samples=20 00:25:36.997 lat (msec) : 4=0.02%, 10=0.32%, 20=1.29%, 50=5.57%, 100=14.32% 00:25:36.997 lat (msec) : 250=73.34%, 500=5.13% 00:25:36.997 cpu : usr=1.07%, sys=1.45%, ctx=2655, majf=0, minf=1 00:25:36.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:36.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:36.997 issued rwts: total=0,4036,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:36.997 job8: (groupid=0, jobs=1): err= 0: pid=3257117: Sat Jul 13 20:13:23 2024 00:25:36.997 write: IOPS=421, BW=105MiB/s (110MB/s)(1065MiB/10114msec); 0 zone resets 00:25:36.997 slat (usec): min=20, max=136620, avg=1849.24, stdev=5081.46 00:25:36.997 clat (msec): min=2, max=478, avg=150.05, stdev=76.89 00:25:36.997 lat (msec): min=2, max=485, avg=151.90, stdev=77.84 00:25:36.997 clat percentiles (msec): 00:25:36.997 | 1.00th=[ 9], 5.00th=[ 25], 10.00th=[ 42], 20.00th=[ 95], 00:25:36.997 | 30.00th=[ 117], 40.00th=[ 130], 50.00th=[ 148], 60.00th=[ 161], 00:25:36.997 | 70.00th=[ 182], 80.00th=[ 203], 90.00th=[ 234], 95.00th=[ 288], 00:25:36.997 | 99.00th=[ 393], 99.50th=[ 456], 99.90th=[ 472], 99.95th=[ 477], 00:25:36.997 | 99.99th=[ 481] 00:25:36.997 bw ( KiB/s): min=51200, max=215040, per=8.34%, avg=107422.30, stdev=35492.89, samples=20 00:25:36.997 iops : min= 200, max= 840, avg=419.55, stdev=138.65, samples=20 00:25:36.997 lat (msec) : 4=0.21%, 10=1.17%, 20=2.28%, 50=7.89%, 100=9.74% 00:25:36.997 lat (msec) : 250=71.19%, 500=7.51% 00:25:36.997 cpu : usr=1.30%, sys=1.35%, ctx=2169, majf=0, minf=1 00:25:36.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:36.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:36.997 issued rwts: total=0,4259,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:36.997 job9: (groupid=0, jobs=1): err= 0: pid=3257118: Sat Jul 13 20:13:23 2024 00:25:36.997 write: IOPS=355, BW=88.8MiB/s (93.1MB/s)(898MiB/10110msec); 0 zone resets 00:25:36.997 slat (usec): min=15, max=116649, avg=2194.57, stdev=5779.49 00:25:36.997 clat (msec): min=6, max=465, avg=177.96, stdev=75.67 00:25:36.997 lat (msec): min=6, max=471, avg=180.16, stdev=76.80 00:25:36.997 clat percentiles (msec): 00:25:36.997 | 1.00th=[ 23], 5.00th=[ 47], 10.00th=[ 79], 20.00th=[ 118], 00:25:36.997 | 30.00th=[ 146], 40.00th=[ 163], 50.00th=[ 176], 60.00th=[ 190], 00:25:36.997 | 70.00th=[ 211], 80.00th=[ 236], 
90.00th=[ 259], 95.00th=[ 279], 00:25:36.997 | 99.00th=[ 422], 99.50th=[ 439], 99.90th=[ 456], 99.95th=[ 460], 00:25:36.997 | 99.99th=[ 468] 00:25:36.997 bw ( KiB/s): min=59392, max=148992, per=7.01%, avg=90283.00, stdev=22951.24, samples=20 00:25:36.997 iops : min= 232, max= 582, avg=352.65, stdev=89.66, samples=20 00:25:36.997 lat (msec) : 10=0.11%, 20=0.67%, 50=4.82%, 100=8.02%, 250=71.75% 00:25:36.997 lat (msec) : 500=14.62% 00:25:36.997 cpu : usr=1.10%, sys=1.13%, ctx=1725, majf=0, minf=1 00:25:36.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:25:36.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:36.997 issued rwts: total=0,3590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:36.997 job10: (groupid=0, jobs=1): err= 0: pid=3257119: Sat Jul 13 20:13:23 2024 00:25:36.997 write: IOPS=478, BW=120MiB/s (125MB/s)(1210MiB/10121msec); 0 zone resets 00:25:36.997 slat (usec): min=23, max=125454, avg=1780.82, stdev=4341.23 00:25:36.997 clat (msec): min=4, max=367, avg=131.74, stdev=56.28 00:25:36.997 lat (msec): min=6, max=367, avg=133.53, stdev=56.94 00:25:36.997 clat percentiles (msec): 00:25:36.997 | 1.00th=[ 15], 5.00th=[ 39], 10.00th=[ 68], 20.00th=[ 78], 00:25:36.997 | 30.00th=[ 107], 40.00th=[ 117], 50.00th=[ 130], 60.00th=[ 144], 00:25:36.997 | 70.00th=[ 157], 80.00th=[ 174], 90.00th=[ 209], 95.00th=[ 232], 00:25:36.997 | 99.00th=[ 255], 99.50th=[ 313], 99.90th=[ 359], 99.95th=[ 363], 00:25:36.997 | 99.99th=[ 368] 00:25:36.997 bw ( KiB/s): min=69632, max=184320, per=9.50%, avg=122316.80, stdev=36450.32, samples=20 00:25:36.997 iops : min= 272, max= 720, avg=477.80, stdev=142.38, samples=20 00:25:36.997 lat (msec) : 10=0.29%, 20=1.38%, 50=6.18%, 100=20.00%, 250=70.52% 00:25:36.997 lat (msec) : 500=1.63% 00:25:36.997 cpu : usr=1.49%, sys=1.47%, ctx=1971, majf=0, minf=1 00:25:36.997 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:36.997 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.997 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:36.997 issued rwts: total=0,4841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.997 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:36.997 00:25:36.997 Run status group 0 (all jobs): 00:25:36.997 WRITE: bw=1257MiB/s (1319MB/s), 88.8MiB/s-137MiB/s (93.1MB/s-144MB/s), io=12.5GiB (13.4GB), run=10110-10177msec 00:25:36.997 00:25:36.997 Disk stats (read/write): 00:25:36.997 nvme0n1: ios=49/8740, merge=0/0, ticks=90/1226463, in_queue=1226553, util=97.69% 00:25:36.997 nvme10n1: ios=43/10609, merge=0/0, ticks=51/1214107, in_queue=1214158, util=97.49% 00:25:36.997 nvme1n1: ios=46/10932, merge=0/0, ticks=2773/1202544, in_queue=1205317, util=100.00% 00:25:36.997 nvme2n1: ios=35/9186, merge=0/0, ticks=1296/1259833, in_queue=1261129, util=100.00% 00:25:36.997 nvme3n1: ios=20/9977, merge=0/0, ticks=114/1224190, in_queue=1224304, util=98.29% 00:25:36.997 nvme4n1: ios=0/10114, merge=0/0, ticks=0/1255460, in_queue=1255460, util=98.19% 00:25:36.997 nvme5n1: ios=41/8400, merge=0/0, ticks=1419/1214182, in_queue=1215601, util=100.00% 00:25:36.997 nvme6n1: ios=42/7847, merge=0/0, ticks=3085/1222590, in_queue=1225675, util=100.00% 00:25:36.997 nvme7n1: ios=0/8326, merge=0/0, ticks=0/1213152, in_queue=1213152, util=98.79% 00:25:36.997 nvme8n1: ios=0/6956, 
merge=0/0, ticks=0/1211997, in_queue=1211997, util=98.98% 00:25:36.997 nvme9n1: ios=43/9473, merge=0/0, ticks=2450/1199506, in_queue=1201956, util=100.00% 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:36.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:36.997 20:13:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:36.997 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 
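The disconnect pass running through here repeats the same three steps for each of the 11 subsystems: drop the host-side controller, poll until the namespace's serial (SPDK1 through SPDK11) disappears from lsblk, then delete the subsystem on the target over JSON-RPC. A minimal sketch of that loop, using only the helper names visible in the trace (rpc_cmd and waitforserial_disconnect are autotest harness functions, and NVMF_SUBSYS is taken to be 11 here, matching the "seq 1 11" above):

  for i in $(seq 1 $NVMF_SUBSYS); do
      # Tear down the host-side controller for this subsystem.
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # Poll lsblk until no block device reports serial SPDK$i anymore.
      waitforserial_disconnect "SPDK${i}"
      # Only then remove the subsystem from the running target via JSON-RPC.
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done

Deleting only after the disconnect has been confirmed keeps the target from pulling the namespace out from under an initiator that is still flushing it, which is why every iteration below shows the lsblk/grep pair before the nvmf_delete_subsystem call.
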
00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:36.997 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.997 20:13:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:36.998 20:13:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:37.255 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:37.512 20:13:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:37.512 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:37.512 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:37.512 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:25:37.512 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:37.512 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:25:37.512 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:37.512 20:13:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:37.513 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.513 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.513 20:13:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.513 20:13:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.513 20:13:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:37.513 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:37.513 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:37.513 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:37.513 20:13:25 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:37.513 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:25:37.513 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:37.513 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:25:37.513 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:37.513 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:37.513 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.513 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.513 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.513 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.513 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:37.771 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:37.771 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:37.771 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:37.771 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:37.771 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:25:37.771 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:37.771 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:25:37.771 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:37.772 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:37.772 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.772 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:37.772 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.772 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.772 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:38.030 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:38.030 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:38.030 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:38.030 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:38.030 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:25:38.030 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:38.030 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:25:38.030 20:13:25 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:38.030 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:38.030 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.030 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.030 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.030 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.030 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:38.288 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:38.288 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:38.288 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:38.288 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:38.288 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:25:38.288 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:38.288 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:25:38.288 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:38.288 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:38.288 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.288 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.288 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.288 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.288 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:38.546 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:38.546 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:38.546 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:38.546 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:38.546 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:25:38.546 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:38.546 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:25:38.546 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:38.546 20:13:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:38.546 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.546 20:13:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.546 20:13:26 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.546 20:13:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.546 20:13:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:38.546 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:38.546 20:13:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:38.546 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:38.546 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:38.546 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:25:38.546 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:38.546 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:25:38.546 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:38.546 20:13:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:38.546 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.546 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.546 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.546 20:13:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.546 20:13:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:38.804 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:38.804 20:13:26 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:38.804 rmmod nvme_tcp 00:25:38.804 rmmod nvme_fabrics 00:25:38.804 rmmod nvme_keyring 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 3251659 ']' 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 3251659 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 3251659 ']' 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 3251659 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3251659 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:38.804 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3251659' 00:25:38.804 killing process with pid 3251659 00:25:38.805 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 3251659 00:25:38.805 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 3251659 00:25:39.370 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:39.370 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:39.370 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:39.370 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:39.370 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:39.370 20:13:26 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.370 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:39.370 20:13:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.275 20:13:28 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:41.275 00:25:41.275 real 1m0.320s 00:25:41.275 user 3m19.701s 00:25:41.275 sys 0m24.663s 00:25:41.275 20:13:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:41.275 20:13:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.275 
************************************ 00:25:41.275 END TEST nvmf_multiconnection 00:25:41.275 ************************************ 00:25:41.275 20:13:28 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:41.275 20:13:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:41.275 20:13:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:41.275 20:13:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:41.275 ************************************ 00:25:41.275 START TEST nvmf_initiator_timeout 00:25:41.275 ************************************ 00:25:41.275 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:41.533 * Looking for test storage... 00:25:41.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:41.533 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:41.534 20:13:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:43.441 20:13:30 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:43.441 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:43.441 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.441 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:43.442 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:43.442 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:43.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:43.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:25:43.442 00:25:43.442 --- 10.0.0.2 ping statistics --- 00:25:43.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.442 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:43.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:43.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:25:43.442 00:25:43.442 --- 10.0.0.1 ping statistics --- 00:25:43.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.442 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:43.442 20:13:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:43.442 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:43.442 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:43.442 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:43.442 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.442 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=3260431 00:25:43.442 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # 
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:43.442 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 3260431 00:25:43.442 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 3260431 ']' 00:25:43.442 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.442 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:43.442 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.442 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:43.442 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.442 [2024-07-13 20:13:31.071576] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:25:43.442 [2024-07-13 20:13:31.071669] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:43.701 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.701 [2024-07-13 20:13:31.137272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:43.701 [2024-07-13 20:13:31.226592] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:43.701 [2024-07-13 20:13:31.226653] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:43.701 [2024-07-13 20:13:31.226666] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:43.701 [2024-07-13 20:13:31.226677] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:43.701 [2024-07-13 20:13:31.226702] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
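Everything on the target side now runs inside the cvl_0_0_ns_spdk network namespace set up a few lines earlier: one physical port (cvl_0_0, 10.0.0.2) was moved into the namespace while its sibling (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator, so a single host can push real NVMe/TCP traffic across physical ports. A sketch of the launch-and-wait step as traced, with the workspace path shortened; waitforlisten is the harness helper that blocks until the app's /var/tmp/spdk.sock JSON-RPC socket answers:

  # nvmf_tgt inside the target namespace: shm instance 0 (-i), all tracepoint
  # groups enabled (-e 0xFFFF), reactors on four cores (-m 0xF).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Do not issue any rpc_cmd until the RPC socket is actually listening.
  waitforlisten "$nvmfpid"

The EAL notices above and the four "Reactor started" lines that follow confirm both flags took effect.
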
00:25:43.701 [2024-07-13 20:13:31.226798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.701 [2024-07-13 20:13:31.226873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:43.701 [2024-07-13 20:13:31.226931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:43.701 [2024-07-13 20:13:31.226935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.701 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:43.701 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:43.701 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:43.701 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:43.701 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.962 Malloc0 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.962 Delay0 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.962 [2024-07-13 20:13:31.414639] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:43.962 [2024-07-13 20:13:31.442941] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.962 20:13:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:44.532 20:13:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:44.532 20:13:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:25:44.532 20:13:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:25:44.532 20:13:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:25:44.532 20:13:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:25:47.067 20:13:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:25:47.067 20:13:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:25:47.067 20:13:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:25:47.067 20:13:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:25:47.067 20:13:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:25:47.067 20:13:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:25:47.067 20:13:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3260754 00:25:47.067 20:13:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:47.067 20:13:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:47.067 [global] 00:25:47.067 thread=1 00:25:47.067 invalidate=1 00:25:47.067 rw=write 00:25:47.067 time_based=1 00:25:47.067 runtime=60 00:25:47.067 ioengine=libaio 00:25:47.067 direct=1 00:25:47.067 bs=4096 00:25:47.067 iodepth=1 00:25:47.067 norandommap=0 00:25:47.067 numjobs=1 00:25:47.067 00:25:47.067 verify_dump=1 00:25:47.067 verify_backlog=512 00:25:47.067 verify_state_save=0 00:25:47.067 do_verify=1 00:25:47.067 verify=crc32c-intel 00:25:47.067 [job0] 00:25:47.067 filename=/dev/nvme0n1 00:25:47.067 Could not set queue depth (nvme0n1) 00:25:47.067 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:47.067 fio-3.35 00:25:47.067 
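The job file above pushes 4 KiB verified writes at iodepth 1 through /dev/nvme0n1, which is backed by Delay0, a delay bdev stacked on Malloc0 with 30-microsecond injected latencies (the -r/-t/-w/-n flags to bdev_delay_create map to average and p99 read/write latency, in microseconds). The rpc_cmd calls that follow are the actual timeout exercise: while fio is in flight, every injected latency is raised past the Linux initiator's default 30-second I/O timeout, held there long enough for the in-flight command to stall and the timeout path to fire, then dropped back to 30 us so the job can drain and verify. Condensed from the trace below (values are microseconds; 31000000 us = 31 s, and p99_write is raised to 310000000 exactly as traced):

  rpc_cmd bdev_delay_update_latency Delay0 avg_read  31000000
  rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
  rpc_cmd bdev_delay_update_latency Delay0 p99_read  31000000
  rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3   # keep the stall in place long enough for the timeout to trip
  for lat in avg_read avg_write p99_read p99_write; do
      rpc_cmd bdev_delay_update_latency Delay0 "$lat" 30
  done

The stall shows up in the final job summary further down: the read completion latency has a sub-millisecond median but a maximum around 41 seconds (max=41284k usec), i.e. at least one command sat through the whole timeout-and-recovery window, and the closing "nvmf hotplug test: fio successful as expected" confirms the crc32c verify pass still came back clean afterwards.
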
Starting 1 thread 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:49.636 true 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:49.636 true 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:49.636 true 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:49.636 true 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.636 20:13:37 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:52.925 true 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:52.925 true 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:52.925 true 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 
-- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:52.925 true 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:52.925 20:13:40 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3260754 00:26:49.160 00:26:49.160 job0: (groupid=0, jobs=1): err= 0: pid=3260841: Sat Jul 13 20:14:34 2024 00:26:49.160 read: IOPS=95, BW=384KiB/s (393kB/s)(22.5MiB/60013msec) 00:26:49.160 slat (usec): min=5, max=7460, avg=16.73, stdev=133.14 00:26:49.160 clat (usec): min=375, max=41284k, avg=10033.27, stdev=544252.32 00:26:49.160 lat (usec): min=381, max=41285k, avg=10050.00, stdev=544252.42 00:26:49.160 clat percentiles (usec): 00:26:49.160 | 1.00th=[ 429], 5.00th=[ 445], 10.00th=[ 453], 00:26:49.160 | 20.00th=[ 469], 30.00th=[ 490], 40.00th=[ 502], 00:26:49.160 | 50.00th=[ 510], 60.00th=[ 519], 70.00th=[ 529], 00:26:49.160 | 80.00th=[ 545], 90.00th=[ 562], 95.00th=[ 41157], 00:26:49.160 | 99.00th=[ 41681], 99.50th=[ 42206], 99.90th=[ 42206], 00:26:49.160 | 99.95th=[ 44827], 99.99th=[17112761] 00:26:49.160 write: IOPS=102, BW=410KiB/s (419kB/s)(24.0MiB/60013msec); 0 zone resets 00:26:49.160 slat (usec): min=7, max=31424, avg=27.40, stdev=401.73 00:26:49.160 clat (usec): min=208, max=1242, avg=316.82, stdev=52.29 00:26:49.160 lat (usec): min=227, max=31803, avg=344.22, stdev=406.73 00:26:49.160 clat percentiles (usec): 00:26:49.160 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 269], 00:26:49.160 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 322], 00:26:49.160 | 70.00th=[ 343], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 404], 00:26:49.160 | 99.00th=[ 441], 99.50th=[ 449], 99.90th=[ 478], 99.95th=[ 482], 00:26:49.160 | 99.99th=[ 1237] 00:26:49.160 bw ( KiB/s): min= 3240, max= 4952, per=100.00%, avg=4096.00, stdev=366.02, samples=12 00:26:49.160 iops : min= 810, max= 1238, avg=1024.00, stdev=91.50, samples=12 00:26:49.160 lat (usec) : 250=6.28%, 500=63.11%, 750=27.57%, 1000=0.21% 00:26:49.160 lat (msec) : 2=0.03%, 50=2.80%, >=2000=0.01% 00:26:49.160 cpu : usr=0.28%, sys=0.48%, ctx=11905, majf=0, minf=2 00:26:49.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:49.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:49.160 issued rwts: total=5755,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:49.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:49.160 00:26:49.160 Run status group 0 (all jobs): 00:26:49.160 READ: bw=384KiB/s (393kB/s), 384KiB/s-384KiB/s (393kB/s-393kB/s), io=22.5MiB (23.6MB), run=60013-60013msec 00:26:49.160 WRITE: bw=410KiB/s (419kB/s), 410KiB/s-410KiB/s (419kB/s-419kB/s), io=24.0MiB (25.2MB), run=60013-60013msec 00:26:49.160 00:26:49.160 Disk stats (read/write): 00:26:49.160 nvme0n1: ios=5808/6144, merge=0/0, ticks=16536/1733, in_queue=18269, util=99.83% 00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:49.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:49.160 20:14:34 
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']'
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected'
00:26:49.160 nvmf hotplug test: fio successful as expected
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:49.160 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 3260431 ']'
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 3260431
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 3260431 ']'
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 3260431
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname
00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:26:49.160 20:14:34
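waitforserial_disconnect, whose xtrace appears above, is simply a poll loop over lsblk; a rough equivalent (the retry cap is an assumption, since the harness's own limit is not visible in this trace):

    waitforserial_disconnect() {
        local serial=$1 i=0
        # Succeed once no block device advertises the given NVMe serial.
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( ++i > 15 )) && return 1   # hypothetical 15-try cap
            sleep 1
        done
        return 0
    }
    waitforserial_disconnect SPDKISFASTANDAWESOME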
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3260431 00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:49.160 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3260431' 00:26:49.160 killing process with pid 3260431 00:26:49.161 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 3260431 00:26:49.161 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 3260431 00:26:49.161 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:49.161 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:49.161 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:49.161 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:49.161 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:49.161 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.161 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:49.161 20:14:34 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.419 20:14:37 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:49.419 00:26:49.419 real 1m8.103s 00:26:49.419 user 4m10.889s 00:26:49.419 sys 0m6.667s 00:26:49.419 20:14:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:49.419 20:14:37 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:49.419 ************************************ 00:26:49.419 END TEST nvmf_initiator_timeout 00:26:49.419 ************************************ 00:26:49.419 20:14:37 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:49.419 20:14:37 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:49.419 20:14:37 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:26:49.419 20:14:37 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:49.419 20:14:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:51.951 20:14:39 nvmf_tcp -- 
nvmf/common.sh@298 -- # mlx=() 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:51.951 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:51.951 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.951 20:14:39 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:51.952 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:51.952 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:51.952 20:14:39 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:51.952 20:14:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:51.952 20:14:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:51.952 20:14:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:51.952 ************************************ 00:26:51.952 START TEST nvmf_perf_adq 00:26:51.952 ************************************ 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:51.952 * Looking for test storage... 
00:26:51.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:51.952 20:14:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:53.858 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.858 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:53.858 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:53.858 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:53.858 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:53.858 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:53.858 20:14:41 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:53.858 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:53.858 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:53.858 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:53.858 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:53.858 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:53.858 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:53.858 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:53.858 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:53.859 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:53.859 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:53.859 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:53.859 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:53.859 20:14:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:54.119 20:14:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:56.057 20:14:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:01.349 20:14:48 
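adq_reload_driver, traced just above, is nothing more than a driver bounce so both E810 ports re-probe with a clean channel configuration before the ADQ run; the settle time mirrors the trace:

    rmmod ice
    modprobe ice
    sleep 5    # let both ports re-register before re-scanning sysfs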
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:01.349 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:01.349 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:01.349 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:01.349 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.349 20:14:48 
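Spelled out, the topology nvmf_tcp_init assembles above is a point-to-point loopback: the target port cvl_0_0 is hidden in its own network namespace with 10.0.0.2, the initiator port cvl_0_1 keeps 10.0.0.1 in the root namespace, and the NVMe/TCP port is opened in the firewall, so initiator traffic has to traverse the real TCP/IP stack (and, on this rig, presumably the link between the two ports). The bare commands, all taken from the trace; the two pings that follow in the log verify each direction:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port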
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:27:01.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:01.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms
00:27:01.349
00:27:01.349 --- 10.0.0.2 ping statistics ---
00:27:01.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:01.349 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms
00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:01.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:01.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms
00:27:01.349
00:27:01.349 --- 10.0.0.1 ping statistics ---
00:27:01.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:01.349 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms
00:27:01.349 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3272402
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3272402
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3272402 ']'
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:01.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable
00:27:01.350 20:14:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:27:01.350 [2024-07-13 20:14:48.847529] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
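nvmfappstart, as traced above, reduces to launching nvmf_tgt inside the target namespace with RPC-gated startup and blocking until the RPC socket answers. A sketch from the SPDK repo root; the rpc_get_methods probe stands in for waitforlisten and is an assumption, not the harness's exact loop:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # The UNIX-domain RPC socket lives in /var/tmp, so it is reachable
    # without entering the namespace.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done
    echo "nvmf_tgt up as pid $nvmfpid"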
00:27:01.350 [2024-07-13 20:14:48.847607] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.350 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.350 [2024-07-13 20:14:48.912526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:01.350 [2024-07-13 20:14:48.998714] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.350 [2024-07-13 20:14:48.998765] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.350 [2024-07-13 20:14:48.998793] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.350 [2024-07-13 20:14:48.998805] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.350 [2024-07-13 20:14:48.998814] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:01.350 [2024-07-13 20:14:48.998902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.350 [2024-07-13 20:14:48.998967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:01.350 [2024-07-13 20:14:48.999034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:01.350 [2024-07-13 20:14:48.999037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.609 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:01.609 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:01.609 20:14:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:01.609 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:01.609 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:01.609 20:14:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.609 20:14:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:01.609 20:14:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:01.609 20:14:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:01.609 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.609 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # 
set +x 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:01.610 [2024-07-13 20:14:49.231797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:01.610 Malloc1 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.610 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:01.870 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.870 20:14:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:01.870 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.870 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:01.870 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.870 20:14:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.870 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.870 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:01.870 [2024-07-13 20:14:49.285275] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.870 20:14:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.870 20:14:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3272483 00:27:01.870 20:14:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:01.870 20:14:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:01.870 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.775 20:14:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:03.775 20:14:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.775 20:14:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:03.775 20:14:51 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.775 20:14:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:03.775 "tick_rate": 2700000000, 
00:27:03.775 "poll_groups": [ 00:27:03.775 { 00:27:03.775 "name": "nvmf_tgt_poll_group_000", 00:27:03.775 "admin_qpairs": 1, 00:27:03.775 "io_qpairs": 1, 00:27:03.775 "current_admin_qpairs": 1, 00:27:03.775 "current_io_qpairs": 1, 00:27:03.775 "pending_bdev_io": 0, 00:27:03.775 "completed_nvme_io": 19882, 00:27:03.775 "transports": [ 00:27:03.775 { 00:27:03.775 "trtype": "TCP" 00:27:03.775 } 00:27:03.775 ] 00:27:03.775 }, 00:27:03.775 { 00:27:03.775 "name": "nvmf_tgt_poll_group_001", 00:27:03.775 "admin_qpairs": 0, 00:27:03.775 "io_qpairs": 1, 00:27:03.775 "current_admin_qpairs": 0, 00:27:03.775 "current_io_qpairs": 1, 00:27:03.775 "pending_bdev_io": 0, 00:27:03.775 "completed_nvme_io": 20189, 00:27:03.775 "transports": [ 00:27:03.775 { 00:27:03.775 "trtype": "TCP" 00:27:03.775 } 00:27:03.775 ] 00:27:03.775 }, 00:27:03.775 { 00:27:03.775 "name": "nvmf_tgt_poll_group_002", 00:27:03.776 "admin_qpairs": 0, 00:27:03.776 "io_qpairs": 1, 00:27:03.776 "current_admin_qpairs": 0, 00:27:03.776 "current_io_qpairs": 1, 00:27:03.776 "pending_bdev_io": 0, 00:27:03.776 "completed_nvme_io": 16200, 00:27:03.776 "transports": [ 00:27:03.776 { 00:27:03.776 "trtype": "TCP" 00:27:03.776 } 00:27:03.776 ] 00:27:03.776 }, 00:27:03.776 { 00:27:03.776 "name": "nvmf_tgt_poll_group_003", 00:27:03.776 "admin_qpairs": 0, 00:27:03.776 "io_qpairs": 1, 00:27:03.776 "current_admin_qpairs": 0, 00:27:03.776 "current_io_qpairs": 1, 00:27:03.776 "pending_bdev_io": 0, 00:27:03.776 "completed_nvme_io": 20594, 00:27:03.776 "transports": [ 00:27:03.776 { 00:27:03.776 "trtype": "TCP" 00:27:03.776 } 00:27:03.776 ] 00:27:03.776 } 00:27:03.776 ] 00:27:03.776 }' 00:27:03.776 20:14:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:03.776 20:14:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:03.776 20:14:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:03.776 20:14:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:03.776 20:14:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3272483 00:27:11.889 Initializing NVMe Controllers 00:27:11.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:11.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:11.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:11.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:11.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:11.889 Initialization complete. Launching workers. 
00:27:11.889 ========================================================
00:27:11.889 Latency(us)
00:27:11.889 Device Information : IOPS MiB/s Average min max
00:27:11.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10843.40 42.36 5903.24 1932.66 8089.12
00:27:11.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10564.50 41.27 6059.31 2035.76 9993.81
00:27:11.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8536.10 33.34 7497.29 2135.15 12703.42
00:27:11.889 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10459.20 40.86 6120.72 1797.84 10274.86
00:27:11.889 ========================================================
00:27:11.889 Total : 40403.20 157.82 6337.13 1797.84 12703.42
00:27:11.889
00:27:11.889
20:14:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:11.889 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3272402 ']'
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3272402
20:14:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3272402 ']'
20:14:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3272402
20:14:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname
20:14:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
20:14:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3272402
20:14:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0
20:14:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
20:14:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3272402'
00:27:11.889 killing process with pid 3272402
20:14:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3272402
20:14:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3272402
00:27:12.148
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
20:14:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns
20:14:59 nvmf_tcp.nvmf_perf_adq --
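The whole ADQ pass above, from configuration to verdict, fits in a dozen RPCs. A condensed sketch using only commands that appear verbatim in the xtrace (scripts/rpc.py and jq assumed on PATH); the pass criterion is the one checked above: with perf pinned to four cores (0xF0), each of the four target poll groups must drive exactly one I/O qpair, which is what the nvmf_get_stats dump showed (completed_nvme_io spread across all four groups) and why [[ 4 -ne 4 ]] fell through:

    rpc=scripts/rpc.py
    impl=$($rpc sock_get_default_impl | jq -r .impl_name)   # posix in this run
    $rpc sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i "$impl"
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' &
    perfpid=$!
    sleep 2
    # Every poll group should own exactly one I/O qpair while perf runs.
    count=$($rpc nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l)
    [[ $count -ne 4 ]] && echo "ADQ steering check failed: $count/4 poll groups busy"
    wait $perfpid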
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.148 20:14:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.148 20:14:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.683 20:15:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:14.683 20:15:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:14.683 20:15:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:14.942 20:15:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:16.842 20:15:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.143 20:15:09 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:22.143 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:22.143 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:22.144 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
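For reference, the device discovery being traced here works in two steps: nvmf/common.sh first collects the PCI device IDs of supported NICs into per-family arrays (E810 0x1592/0x159b, X722 0x37d2, and the Mellanox ConnectX IDs), then resolves each matched PCI address to its kernel net device through /sys/bus/pci/devices/<addr>/net, which is where the "Found net devices under ..." lines below come from. A minimal standalone sketch of the same idea, using lspci as an illustrative stand-in for the harness's pci_bus_cache lookup (an assumption, not the actual nvmf/common.sh code):

    #!/usr/bin/env bash
    # Sketch only: map Intel E810 PCI IDs to their kernel netdevs via sysfs.
    # lspci here is an illustrative substitute for the prebuilt pci_bus_cache.
    intel=0x8086
    for dev_id in 1592 159b; do                       # E810 IDs matched in the trace
        for pci in $(lspci -D -d "${intel#0x}:${dev_id}" | awk '{print $1}'); do
            for path in "/sys/bus/pci/devices/$pci/net/"*; do
                [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
            done
        done
    done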
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:22.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:22.144 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.144 
20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:27:22.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:22.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms
00:27:22.144
00:27:22.144 --- 10.0.0.2 ping statistics ---
00:27:22.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:22.144 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:22.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:22.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms
00:27:22.144
00:27:22.144 --- 10.0.0.1 ping statistics ---
00:27:22.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:22.144 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:27:22.144 net.core.busy_poll = 1
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:27:22.144 net.core.busy_read = 1
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:27:22.144 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:27:22.145 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc
00:27:22.145 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:22.145 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable
00:27:22.145 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:27:22.145 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3275706
00:27:22.145 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:27:22.145 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3275706
00:27:22.145 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 3275706 ']'
00:27:22.145 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:22.145 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100
00:27:22.145 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:22.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:22.145 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable
00:27:22.145 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:27:22.145 [2024-07-13 20:15:09.676514] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:27:22.145 [2024-07-13 20:15:09.676611] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:22.145 EAL: No free 2048 kB hugepages reported on node 1
00:27:22.145 [2024-07-13 20:15:09.741926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:22.403 [2024-07-13 20:15:09.832053] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:22.403 [2024-07-13 20:15:09.832112] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:22.403 [2024-07-13 20:15:09.832141] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:22.403 [2024-07-13 20:15:09.832154] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:22.403 [2024-07-13 20:15:09.832164] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
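The adq_configure_driver sequence just traced is the substance of this test: it enables hardware traffic classes on the ice NIC, turns on socket busy polling, and uses an mqprio qdisc plus a hardware-offloaded flower filter to pin NVMe/TCP traffic to its own queue set. A condensed sketch of the same commands, shown outside the cvl_0_0_ns_spdk namespace the log uses, with IFACE, IP and PORT as illustrative placeholders:

    # ADQ setup sketch; mirrors the commands traced above, placeholders aside.
    IFACE=cvl_0_0; IP=10.0.0.2; PORT=4420
    ethtool --offload "$IFACE" hw-tc-offload on          # enable HW traffic classes (ice)
    ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1  # busy-poll sockets instead of sleeping
    # Two traffic classes: TC0 = 2 queues at offset 0 (default), TC1 = 2 queues at offset 2 (ADQ)
    tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$IFACE" ingress
    # Steer NVMe/TCP traffic (TCP to the listener port) into TC1, offloaded to hardware
    tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
        dst_ip "$IP"/32 ip_proto tcp dst_port "$PORT" skip_sw hw_tc 1

The nvmf_get_stats check further down, which counts poll groups with current_io_qpairs == 0, then appears to serve as the pass criterion: with ADQ steering in effect, all I/O queue pairs should land on the busy-polled poll groups, leaving the other two idle.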
00:27:22.404 [2024-07-13 20:15:09.832234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.404 [2024-07-13 20:15:09.832295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.404 [2024-07-13 20:15:09.832372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:22.404 [2024-07-13 20:15:09.832375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.404 20:15:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.662 [2024-07-13 20:15:10.086531] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.662 Malloc1 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.662 20:15:10 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:22.662 [2024-07-13 20:15:10.140038] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3275742 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:22.662 20:15:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:22.662 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.564 20:15:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:24.564 20:15:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.564 20:15:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:24.564 20:15:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.564 20:15:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:24.564 "tick_rate": 2700000000, 00:27:24.564 "poll_groups": [ 00:27:24.564 { 00:27:24.564 "name": "nvmf_tgt_poll_group_000", 00:27:24.564 "admin_qpairs": 1, 00:27:24.564 "io_qpairs": 2, 00:27:24.564 "current_admin_qpairs": 1, 00:27:24.564 "current_io_qpairs": 2, 00:27:24.564 "pending_bdev_io": 0, 00:27:24.564 "completed_nvme_io": 22399, 00:27:24.564 "transports": [ 00:27:24.564 { 00:27:24.564 "trtype": "TCP" 00:27:24.564 } 00:27:24.564 ] 00:27:24.564 }, 00:27:24.564 { 00:27:24.564 "name": "nvmf_tgt_poll_group_001", 00:27:24.564 "admin_qpairs": 0, 00:27:24.564 "io_qpairs": 2, 00:27:24.564 "current_admin_qpairs": 0, 00:27:24.564 "current_io_qpairs": 2, 00:27:24.564 "pending_bdev_io": 0, 00:27:24.564 "completed_nvme_io": 26905, 00:27:24.564 "transports": [ 00:27:24.564 { 00:27:24.564 "trtype": "TCP" 00:27:24.564 } 00:27:24.564 ] 00:27:24.564 }, 00:27:24.564 { 00:27:24.564 "name": "nvmf_tgt_poll_group_002", 00:27:24.564 "admin_qpairs": 0, 00:27:24.564 "io_qpairs": 0, 00:27:24.564 "current_admin_qpairs": 0, 00:27:24.564 "current_io_qpairs": 0, 00:27:24.564 "pending_bdev_io": 0, 00:27:24.564 "completed_nvme_io": 0, 
00:27:24.564 "transports": [
00:27:24.564 {
00:27:24.564 "trtype": "TCP"
00:27:24.564 }
00:27:24.564 ]
00:27:24.564 },
00:27:24.564 {
00:27:24.564 "name": "nvmf_tgt_poll_group_003",
00:27:24.564 "admin_qpairs": 0,
00:27:24.564 "io_qpairs": 0,
00:27:24.564 "current_admin_qpairs": 0,
00:27:24.564 "current_io_qpairs": 0,
00:27:24.564 "pending_bdev_io": 0,
00:27:24.564 "completed_nvme_io": 0,
00:27:24.564 "transports": [
00:27:24.564 {
00:27:24.564 "trtype": "TCP"
00:27:24.564 }
00:27:24.564 ]
00:27:24.564 }
00:27:24.564 ]
00:27:24.564 }'
00:27:24.564 20:15:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:27:24.564 20:15:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l
00:27:24.564 20:15:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2
00:27:24.564 20:15:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]]
00:27:24.564 20:15:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3275742
00:27:32.669 Initializing NVMe Controllers
00:27:32.669 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:32.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:27:32.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:27:32.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:27:32.669 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:27:32.669 Initialization complete. Launching workers.
00:27:32.669 ========================================================
00:27:32.669 Latency(us)
00:27:32.669 Device Information : IOPS MiB/s Average min max
00:27:32.669 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6032.80 23.57 10632.76 1767.60 56693.88
00:27:32.669 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5774.30 22.56 11086.59 1932.75 58652.17
00:27:32.669 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7163.50 27.98 8938.09 1555.16 53615.28
00:27:32.669 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7013.80 27.40 9155.58 1721.40 53021.20
00:27:32.669 ========================================================
00:27:32.669 Total : 25984.39 101.50 9867.69 1555.16 58652.17
00:27:32.669
00:27:32.669 20:15:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:27:32.669 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:32.669 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:27:32.669 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:32.669 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:27:32.669 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:32.669 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:32.928 rmmod nvme_tcp
00:27:32.928 rmmod nvme_fabrics
00:27:32.928 rmmod nvme_keyring
00:27:32.928 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:32.928 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:27:32.928 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:27:32.928 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3275706 ']'
00:27:32.928 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- #
killprocess 3275706 00:27:32.928 20:15:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 3275706 ']' 00:27:32.928 20:15:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 3275706 00:27:32.928 20:15:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:27:32.928 20:15:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:32.928 20:15:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3275706 00:27:32.928 20:15:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:32.928 20:15:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:32.928 20:15:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3275706' 00:27:32.928 killing process with pid 3275706 00:27:32.928 20:15:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 3275706 00:27:32.928 20:15:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 3275706 00:27:33.187 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:33.187 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:33.187 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:33.188 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:33.188 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:33.188 20:15:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.188 20:15:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.188 20:15:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.092 20:15:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:35.092 20:15:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:35.092 00:27:35.092 real 0m43.629s 00:27:35.092 user 2m29.314s 00:27:35.092 sys 0m13.320s 00:27:35.092 20:15:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:35.092 20:15:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.092 ************************************ 00:27:35.092 END TEST nvmf_perf_adq 00:27:35.092 ************************************ 00:27:35.092 20:15:22 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:35.092 20:15:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:35.092 20:15:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:35.092 20:15:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:35.092 ************************************ 00:27:35.092 START TEST nvmf_shutdown 00:27:35.092 ************************************ 00:27:35.092 20:15:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:35.350 * Looking for test storage... 
00:27:35.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:35.350 ************************************ 00:27:35.350 START TEST nvmf_shutdown_tc1 00:27:35.350 ************************************ 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:27:35.350 20:15:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:35.350 20:15:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:37.249 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:37.249 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:37.249 20:15:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:37.249 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.249 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:37.249 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:37.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:37.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:27:37.250 00:27:37.250 --- 10.0.0.2 ping statistics --- 00:27:37.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.250 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:37.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:37.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:27:37.250 00:27:37.250 --- 10.0.0.1 ping statistics --- 00:27:37.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:37.250 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3278891 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3278891 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3278891 ']' 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:37.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:37.250 20:15:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:37.250 [2024-07-13 20:15:24.891298] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:27:37.250 [2024-07-13 20:15:24.891374] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:37.509 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.509 [2024-07-13 20:15:24.960659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:37.509 [2024-07-13 20:15:25.053986] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:37.509 [2024-07-13 20:15:25.054040] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:37.509 [2024-07-13 20:15:25.054058] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:37.509 [2024-07-13 20:15:25.054072] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:37.509 [2024-07-13 20:15:25.054085] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:37.509 [2024-07-13 20:15:25.054200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:37.509 [2024-07-13 20:15:25.054279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:37.509 [2024-07-13 20:15:25.054485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:37.509 [2024-07-13 20:15:25.054488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:37.768 [2024-07-13 20:15:25.222662] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.768 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:37.768 Malloc1 00:27:37.768 [2024-07-13 20:15:25.308417] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.768 Malloc2 00:27:37.768 Malloc3 00:27:38.026 Malloc4 00:27:38.026 Malloc5 00:27:38.026 Malloc6 00:27:38.026 Malloc7 00:27:38.026 Malloc8 00:27:38.285 Malloc9 00:27:38.285 Malloc10 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3279069 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3279069 /var/tmp/bdevperf.sock 00:27:38.285 20:15:25 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 3279069 ']' 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:38.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.285 { 00:27:38.285 "params": { 00:27:38.285 "name": "Nvme$subsystem", 00:27:38.285 "trtype": "$TEST_TRANSPORT", 00:27:38.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.285 "adrfam": "ipv4", 00:27:38.285 "trsvcid": "$NVMF_PORT", 00:27:38.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.285 "hdgst": ${hdgst:-false}, 00:27:38.285 "ddgst": ${ddgst:-false} 00:27:38.285 }, 00:27:38.285 "method": "bdev_nvme_attach_controller" 00:27:38.285 } 00:27:38.285 EOF 00:27:38.285 )") 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.285 { 00:27:38.285 "params": { 00:27:38.285 "name": "Nvme$subsystem", 00:27:38.285 "trtype": "$TEST_TRANSPORT", 00:27:38.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.285 "adrfam": "ipv4", 00:27:38.285 "trsvcid": "$NVMF_PORT", 00:27:38.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.285 "hdgst": ${hdgst:-false}, 00:27:38.285 "ddgst": ${ddgst:-false} 00:27:38.285 }, 00:27:38.285 "method": "bdev_nvme_attach_controller" 00:27:38.285 } 00:27:38.285 EOF 00:27:38.285 )") 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.285 { 00:27:38.285 "params": { 00:27:38.285 "name": "Nvme$subsystem", 00:27:38.285 "trtype": 
"$TEST_TRANSPORT", 00:27:38.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.285 "adrfam": "ipv4", 00:27:38.285 "trsvcid": "$NVMF_PORT", 00:27:38.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.285 "hdgst": ${hdgst:-false}, 00:27:38.285 "ddgst": ${ddgst:-false} 00:27:38.285 }, 00:27:38.285 "method": "bdev_nvme_attach_controller" 00:27:38.285 } 00:27:38.285 EOF 00:27:38.285 )") 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.285 { 00:27:38.285 "params": { 00:27:38.285 "name": "Nvme$subsystem", 00:27:38.285 "trtype": "$TEST_TRANSPORT", 00:27:38.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.285 "adrfam": "ipv4", 00:27:38.285 "trsvcid": "$NVMF_PORT", 00:27:38.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.285 "hdgst": ${hdgst:-false}, 00:27:38.285 "ddgst": ${ddgst:-false} 00:27:38.285 }, 00:27:38.285 "method": "bdev_nvme_attach_controller" 00:27:38.285 } 00:27:38.285 EOF 00:27:38.285 )") 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.285 { 00:27:38.285 "params": { 00:27:38.285 "name": "Nvme$subsystem", 00:27:38.285 "trtype": "$TEST_TRANSPORT", 00:27:38.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.285 "adrfam": "ipv4", 00:27:38.285 "trsvcid": "$NVMF_PORT", 00:27:38.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.285 "hdgst": ${hdgst:-false}, 00:27:38.285 "ddgst": ${ddgst:-false} 00:27:38.285 }, 00:27:38.285 "method": "bdev_nvme_attach_controller" 00:27:38.285 } 00:27:38.285 EOF 00:27:38.285 )") 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.285 { 00:27:38.285 "params": { 00:27:38.285 "name": "Nvme$subsystem", 00:27:38.285 "trtype": "$TEST_TRANSPORT", 00:27:38.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.285 "adrfam": "ipv4", 00:27:38.285 "trsvcid": "$NVMF_PORT", 00:27:38.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.285 "hdgst": ${hdgst:-false}, 00:27:38.285 "ddgst": ${ddgst:-false} 00:27:38.285 }, 00:27:38.285 "method": "bdev_nvme_attach_controller" 00:27:38.285 } 00:27:38.285 EOF 00:27:38.285 )") 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.285 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.285 { 00:27:38.285 "params": { 00:27:38.285 "name": "Nvme$subsystem", 00:27:38.285 "trtype": "$TEST_TRANSPORT", 
00:27:38.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.285 "adrfam": "ipv4", 00:27:38.285 "trsvcid": "$NVMF_PORT", 00:27:38.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.286 "hdgst": ${hdgst:-false}, 00:27:38.286 "ddgst": ${ddgst:-false} 00:27:38.286 }, 00:27:38.286 "method": "bdev_nvme_attach_controller" 00:27:38.286 } 00:27:38.286 EOF 00:27:38.286 )") 00:27:38.286 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.286 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.286 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.286 { 00:27:38.286 "params": { 00:27:38.286 "name": "Nvme$subsystem", 00:27:38.286 "trtype": "$TEST_TRANSPORT", 00:27:38.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.286 "adrfam": "ipv4", 00:27:38.286 "trsvcid": "$NVMF_PORT", 00:27:38.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.286 "hdgst": ${hdgst:-false}, 00:27:38.286 "ddgst": ${ddgst:-false} 00:27:38.286 }, 00:27:38.286 "method": "bdev_nvme_attach_controller" 00:27:38.286 } 00:27:38.286 EOF 00:27:38.286 )") 00:27:38.286 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.286 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.286 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.286 { 00:27:38.286 "params": { 00:27:38.286 "name": "Nvme$subsystem", 00:27:38.286 "trtype": "$TEST_TRANSPORT", 00:27:38.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.286 "adrfam": "ipv4", 00:27:38.286 "trsvcid": "$NVMF_PORT", 00:27:38.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.286 "hdgst": ${hdgst:-false}, 00:27:38.286 "ddgst": ${ddgst:-false} 00:27:38.286 }, 00:27:38.286 "method": "bdev_nvme_attach_controller" 00:27:38.286 } 00:27:38.286 EOF 00:27:38.286 )") 00:27:38.286 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.286 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.286 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.286 { 00:27:38.286 "params": { 00:27:38.286 "name": "Nvme$subsystem", 00:27:38.286 "trtype": "$TEST_TRANSPORT", 00:27:38.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.286 "adrfam": "ipv4", 00:27:38.286 "trsvcid": "$NVMF_PORT", 00:27:38.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.286 "hdgst": ${hdgst:-false}, 00:27:38.286 "ddgst": ${ddgst:-false} 00:27:38.286 }, 00:27:38.286 "method": "bdev_nvme_attach_controller" 00:27:38.286 } 00:27:38.286 EOF 00:27:38.286 )") 00:27:38.286 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:38.286 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:38.286 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:38.286 20:15:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:38.286 "params": { 00:27:38.286 "name": "Nvme1", 00:27:38.286 "trtype": "tcp", 00:27:38.286 "traddr": "10.0.0.2", 00:27:38.286 "adrfam": "ipv4", 00:27:38.286 "trsvcid": "4420", 00:27:38.286 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.286 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:38.286 "hdgst": false, 00:27:38.286 "ddgst": false 00:27:38.286 }, 00:27:38.286 "method": "bdev_nvme_attach_controller" 00:27:38.286 },{ 00:27:38.286 "params": { 00:27:38.286 "name": "Nvme2", 00:27:38.286 "trtype": "tcp", 00:27:38.286 "traddr": "10.0.0.2", 00:27:38.286 "adrfam": "ipv4", 00:27:38.286 "trsvcid": "4420", 00:27:38.286 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:38.286 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:38.286 "hdgst": false, 00:27:38.286 "ddgst": false 00:27:38.286 }, 00:27:38.286 "method": "bdev_nvme_attach_controller" 00:27:38.286 },{ 00:27:38.286 "params": { 00:27:38.286 "name": "Nvme3", 00:27:38.286 "trtype": "tcp", 00:27:38.286 "traddr": "10.0.0.2", 00:27:38.286 "adrfam": "ipv4", 00:27:38.286 "trsvcid": "4420", 00:27:38.286 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:38.286 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:38.286 "hdgst": false, 00:27:38.286 "ddgst": false 00:27:38.286 }, 00:27:38.286 "method": "bdev_nvme_attach_controller" 00:27:38.286 },{ 00:27:38.286 "params": { 00:27:38.286 "name": "Nvme4", 00:27:38.286 "trtype": "tcp", 00:27:38.286 "traddr": "10.0.0.2", 00:27:38.286 "adrfam": "ipv4", 00:27:38.286 "trsvcid": "4420", 00:27:38.286 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:38.286 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:38.286 "hdgst": false, 00:27:38.286 "ddgst": false 00:27:38.286 }, 00:27:38.286 "method": "bdev_nvme_attach_controller" 00:27:38.286 },{ 00:27:38.286 "params": { 00:27:38.286 "name": "Nvme5", 00:27:38.286 "trtype": "tcp", 00:27:38.286 "traddr": "10.0.0.2", 00:27:38.286 "adrfam": "ipv4", 00:27:38.286 "trsvcid": "4420", 00:27:38.286 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:38.286 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:38.286 "hdgst": false, 00:27:38.286 "ddgst": false 00:27:38.286 }, 00:27:38.286 "method": "bdev_nvme_attach_controller" 00:27:38.286 },{ 00:27:38.286 "params": { 00:27:38.286 "name": "Nvme6", 00:27:38.286 "trtype": "tcp", 00:27:38.286 "traddr": "10.0.0.2", 00:27:38.286 "adrfam": "ipv4", 00:27:38.286 "trsvcid": "4420", 00:27:38.286 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:38.286 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:38.286 "hdgst": false, 00:27:38.286 "ddgst": false 00:27:38.286 }, 00:27:38.286 "method": "bdev_nvme_attach_controller" 00:27:38.286 },{ 00:27:38.286 "params": { 00:27:38.286 "name": "Nvme7", 00:27:38.286 "trtype": "tcp", 00:27:38.286 "traddr": "10.0.0.2", 00:27:38.286 "adrfam": "ipv4", 00:27:38.286 "trsvcid": "4420", 00:27:38.286 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:38.286 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:38.286 "hdgst": false, 00:27:38.286 "ddgst": false 00:27:38.286 }, 00:27:38.286 "method": "bdev_nvme_attach_controller" 00:27:38.286 },{ 00:27:38.286 "params": { 00:27:38.286 "name": "Nvme8", 00:27:38.286 "trtype": "tcp", 00:27:38.286 "traddr": "10.0.0.2", 00:27:38.286 "adrfam": "ipv4", 00:27:38.286 "trsvcid": "4420", 00:27:38.286 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:38.286 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:38.286 "hdgst": false, 
00:27:38.286 "ddgst": false 00:27:38.286 }, 00:27:38.286 "method": "bdev_nvme_attach_controller" 00:27:38.286 },{ 00:27:38.286 "params": { 00:27:38.286 "name": "Nvme9", 00:27:38.286 "trtype": "tcp", 00:27:38.286 "traddr": "10.0.0.2", 00:27:38.286 "adrfam": "ipv4", 00:27:38.286 "trsvcid": "4420", 00:27:38.286 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:38.286 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:38.286 "hdgst": false, 00:27:38.286 "ddgst": false 00:27:38.286 }, 00:27:38.286 "method": "bdev_nvme_attach_controller" 00:27:38.286 },{ 00:27:38.286 "params": { 00:27:38.286 "name": "Nvme10", 00:27:38.286 "trtype": "tcp", 00:27:38.286 "traddr": "10.0.0.2", 00:27:38.286 "adrfam": "ipv4", 00:27:38.286 "trsvcid": "4420", 00:27:38.286 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:38.286 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:38.286 "hdgst": false, 00:27:38.286 "ddgst": false 00:27:38.286 }, 00:27:38.286 "method": "bdev_nvme_attach_controller" 00:27:38.286 }' 00:27:38.286 [2024-07-13 20:15:25.830276] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:38.286 [2024-07-13 20:15:25.830348] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:38.286 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.286 [2024-07-13 20:15:25.894954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.544 [2024-07-13 20:15:25.983061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.441 20:15:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:40.441 20:15:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:27:40.441 20:15:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:40.441 20:15:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.441 20:15:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:40.441 20:15:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.441 20:15:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3279069 00:27:40.441 20:15:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:40.441 20:15:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:41.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3279069 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3278891 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@532 -- # local subsystem config 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.374 { 00:27:41.374 "params": { 00:27:41.374 "name": "Nvme$subsystem", 00:27:41.374 "trtype": "$TEST_TRANSPORT", 00:27:41.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.374 "adrfam": "ipv4", 00:27:41.374 "trsvcid": "$NVMF_PORT", 00:27:41.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.374 "hdgst": ${hdgst:-false}, 00:27:41.374 "ddgst": ${ddgst:-false} 00:27:41.374 }, 00:27:41.374 "method": "bdev_nvme_attach_controller" 00:27:41.374 } 00:27:41.374 EOF 00:27:41.374 )") 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.374 { 00:27:41.374 "params": { 00:27:41.374 "name": "Nvme$subsystem", 00:27:41.374 "trtype": "$TEST_TRANSPORT", 00:27:41.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.374 "adrfam": "ipv4", 00:27:41.374 "trsvcid": "$NVMF_PORT", 00:27:41.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.374 "hdgst": ${hdgst:-false}, 00:27:41.374 "ddgst": ${ddgst:-false} 00:27:41.374 }, 00:27:41.374 "method": "bdev_nvme_attach_controller" 00:27:41.374 } 00:27:41.374 EOF 00:27:41.374 )") 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.374 { 00:27:41.374 "params": { 00:27:41.374 "name": "Nvme$subsystem", 00:27:41.374 "trtype": "$TEST_TRANSPORT", 00:27:41.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.374 "adrfam": "ipv4", 00:27:41.374 "trsvcid": "$NVMF_PORT", 00:27:41.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.374 "hdgst": ${hdgst:-false}, 00:27:41.374 "ddgst": ${ddgst:-false} 00:27:41.374 }, 00:27:41.374 "method": "bdev_nvme_attach_controller" 00:27:41.374 } 00:27:41.374 EOF 00:27:41.374 )") 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.374 { 00:27:41.374 "params": { 00:27:41.374 "name": "Nvme$subsystem", 00:27:41.374 "trtype": "$TEST_TRANSPORT", 00:27:41.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.374 "adrfam": "ipv4", 00:27:41.374 "trsvcid": "$NVMF_PORT", 00:27:41.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.374 "hdgst": ${hdgst:-false}, 00:27:41.374 "ddgst": ${ddgst:-false} 00:27:41.374 }, 00:27:41.374 "method": "bdev_nvme_attach_controller" 00:27:41.374 } 00:27:41.374 EOF 00:27:41.374 )") 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.374 { 00:27:41.374 "params": { 00:27:41.374 "name": "Nvme$subsystem", 00:27:41.374 "trtype": "$TEST_TRANSPORT", 00:27:41.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.374 "adrfam": "ipv4", 00:27:41.374 "trsvcid": "$NVMF_PORT", 00:27:41.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.374 "hdgst": ${hdgst:-false}, 00:27:41.374 "ddgst": ${ddgst:-false} 00:27:41.374 }, 00:27:41.374 "method": "bdev_nvme_attach_controller" 00:27:41.374 } 00:27:41.374 EOF 00:27:41.374 )") 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.374 { 00:27:41.374 "params": { 00:27:41.374 "name": "Nvme$subsystem", 00:27:41.374 "trtype": "$TEST_TRANSPORT", 00:27:41.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.374 "adrfam": "ipv4", 00:27:41.374 "trsvcid": "$NVMF_PORT", 00:27:41.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.374 "hdgst": ${hdgst:-false}, 00:27:41.374 "ddgst": ${ddgst:-false} 00:27:41.374 }, 00:27:41.374 "method": "bdev_nvme_attach_controller" 00:27:41.374 } 00:27:41.374 EOF 00:27:41.374 )") 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.374 { 00:27:41.374 "params": { 00:27:41.374 "name": "Nvme$subsystem", 00:27:41.374 "trtype": "$TEST_TRANSPORT", 00:27:41.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.374 "adrfam": "ipv4", 00:27:41.374 "trsvcid": "$NVMF_PORT", 00:27:41.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.374 "hdgst": ${hdgst:-false}, 00:27:41.374 "ddgst": ${ddgst:-false} 00:27:41.374 }, 00:27:41.374 "method": "bdev_nvme_attach_controller" 00:27:41.374 } 00:27:41.374 EOF 00:27:41.374 )") 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.374 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.375 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.375 { 00:27:41.375 "params": { 00:27:41.375 "name": "Nvme$subsystem", 00:27:41.375 "trtype": "$TEST_TRANSPORT", 00:27:41.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.375 "adrfam": "ipv4", 00:27:41.375 "trsvcid": "$NVMF_PORT", 00:27:41.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.375 "hdgst": ${hdgst:-false}, 00:27:41.375 "ddgst": ${ddgst:-false} 00:27:41.375 }, 00:27:41.375 "method": "bdev_nvme_attach_controller" 00:27:41.375 } 00:27:41.375 EOF 00:27:41.375 )") 00:27:41.375 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:27:41.375 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.375 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.375 { 00:27:41.375 "params": { 00:27:41.375 "name": "Nvme$subsystem", 00:27:41.375 "trtype": "$TEST_TRANSPORT", 00:27:41.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.375 "adrfam": "ipv4", 00:27:41.375 "trsvcid": "$NVMF_PORT", 00:27:41.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.375 "hdgst": ${hdgst:-false}, 00:27:41.375 "ddgst": ${ddgst:-false} 00:27:41.375 }, 00:27:41.375 "method": "bdev_nvme_attach_controller" 00:27:41.375 } 00:27:41.375 EOF 00:27:41.375 )") 00:27:41.375 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.375 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:41.375 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:41.375 { 00:27:41.375 "params": { 00:27:41.375 "name": "Nvme$subsystem", 00:27:41.375 "trtype": "$TEST_TRANSPORT", 00:27:41.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.375 "adrfam": "ipv4", 00:27:41.375 "trsvcid": "$NVMF_PORT", 00:27:41.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.375 "hdgst": ${hdgst:-false}, 00:27:41.375 "ddgst": ${ddgst:-false} 00:27:41.375 }, 00:27:41.375 "method": "bdev_nvme_attach_controller" 00:27:41.375 } 00:27:41.375 EOF 00:27:41.375 )") 00:27:41.375 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:41.375 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
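For context, the tc1 sequence replayed in this log boils down to: start a disposable bdev_svc app against the target, SIGKILL it mid-flight, confirm the target itself survived, then re-attach with bdevperf and verify I/O still completes. A condensed sketch of that flow, under the same assumptions as the sketch above (relative paths and the hypothetical gen_target_json; waitforlisten is the autotest_common.sh helper seen in the trace):

# shutdown.sh@77-79: disposable client, config fed through process substitution
test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_target_json 1 2 3 4 5 6 7 8 9 10) &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock

# shutdown.sh@83-88: hard-kill the client, then prove the target is still up
kill -9 "$perfpid"
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 "$nvmfpid"    # succeeds only if the nvmf target survived the client's death

# shutdown.sh@91: re-attach and run a one-second verify workload over all 10 nodes
build/examples/bdevperf --json <(gen_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1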
00:27:41.375 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:41.375 20:15:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:41.375 "params": { 00:27:41.375 "name": "Nvme1", 00:27:41.375 "trtype": "tcp", 00:27:41.375 "traddr": "10.0.0.2", 00:27:41.375 "adrfam": "ipv4", 00:27:41.375 "trsvcid": "4420", 00:27:41.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:41.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:41.375 "hdgst": false, 00:27:41.375 "ddgst": false 00:27:41.375 }, 00:27:41.375 "method": "bdev_nvme_attach_controller" 00:27:41.375 },{ 00:27:41.375 "params": { 00:27:41.375 "name": "Nvme2", 00:27:41.375 "trtype": "tcp", 00:27:41.375 "traddr": "10.0.0.2", 00:27:41.375 "adrfam": "ipv4", 00:27:41.375 "trsvcid": "4420", 00:27:41.375 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:41.375 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:41.375 "hdgst": false, 00:27:41.375 "ddgst": false 00:27:41.375 }, 00:27:41.375 "method": "bdev_nvme_attach_controller" 00:27:41.375 },{ 00:27:41.375 "params": { 00:27:41.375 "name": "Nvme3", 00:27:41.375 "trtype": "tcp", 00:27:41.375 "traddr": "10.0.0.2", 00:27:41.375 "adrfam": "ipv4", 00:27:41.375 "trsvcid": "4420", 00:27:41.375 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:41.375 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:41.375 "hdgst": false, 00:27:41.375 "ddgst": false 00:27:41.375 }, 00:27:41.375 "method": "bdev_nvme_attach_controller" 00:27:41.375 },{ 00:27:41.375 "params": { 00:27:41.375 "name": "Nvme4", 00:27:41.375 "trtype": "tcp", 00:27:41.375 "traddr": "10.0.0.2", 00:27:41.375 "adrfam": "ipv4", 00:27:41.375 "trsvcid": "4420", 00:27:41.375 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:41.375 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:41.375 "hdgst": false, 00:27:41.375 "ddgst": false 00:27:41.375 }, 00:27:41.375 "method": "bdev_nvme_attach_controller" 00:27:41.375 },{ 00:27:41.375 "params": { 00:27:41.375 "name": "Nvme5", 00:27:41.375 "trtype": "tcp", 00:27:41.375 "traddr": "10.0.0.2", 00:27:41.375 "adrfam": "ipv4", 00:27:41.375 "trsvcid": "4420", 00:27:41.375 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:41.375 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:41.375 "hdgst": false, 00:27:41.375 "ddgst": false 00:27:41.375 }, 00:27:41.375 "method": "bdev_nvme_attach_controller" 00:27:41.375 },{ 00:27:41.375 "params": { 00:27:41.375 "name": "Nvme6", 00:27:41.375 "trtype": "tcp", 00:27:41.375 "traddr": "10.0.0.2", 00:27:41.375 "adrfam": "ipv4", 00:27:41.375 "trsvcid": "4420", 00:27:41.375 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:41.375 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:41.375 "hdgst": false, 00:27:41.375 "ddgst": false 00:27:41.375 }, 00:27:41.375 "method": "bdev_nvme_attach_controller" 00:27:41.375 },{ 00:27:41.375 "params": { 00:27:41.375 "name": "Nvme7", 00:27:41.375 "trtype": "tcp", 00:27:41.375 "traddr": "10.0.0.2", 00:27:41.375 "adrfam": "ipv4", 00:27:41.375 "trsvcid": "4420", 00:27:41.375 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:41.375 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:41.375 "hdgst": false, 00:27:41.375 "ddgst": false 00:27:41.375 }, 00:27:41.375 "method": "bdev_nvme_attach_controller" 00:27:41.375 },{ 00:27:41.375 "params": { 00:27:41.375 "name": "Nvme8", 00:27:41.375 "trtype": "tcp", 00:27:41.375 "traddr": "10.0.0.2", 00:27:41.375 "adrfam": "ipv4", 00:27:41.375 "trsvcid": "4420", 00:27:41.375 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:41.375 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:41.375 "hdgst": false, 
00:27:41.375 "ddgst": false 00:27:41.375 }, 00:27:41.375 "method": "bdev_nvme_attach_controller" 00:27:41.375 },{ 00:27:41.375 "params": { 00:27:41.375 "name": "Nvme9", 00:27:41.375 "trtype": "tcp", 00:27:41.375 "traddr": "10.0.0.2", 00:27:41.375 "adrfam": "ipv4", 00:27:41.375 "trsvcid": "4420", 00:27:41.375 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:41.375 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:41.375 "hdgst": false, 00:27:41.375 "ddgst": false 00:27:41.375 }, 00:27:41.375 "method": "bdev_nvme_attach_controller" 00:27:41.375 },{ 00:27:41.375 "params": { 00:27:41.375 "name": "Nvme10", 00:27:41.375 "trtype": "tcp", 00:27:41.375 "traddr": "10.0.0.2", 00:27:41.375 "adrfam": "ipv4", 00:27:41.375 "trsvcid": "4420", 00:27:41.375 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:41.375 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:41.375 "hdgst": false, 00:27:41.375 "ddgst": false 00:27:41.375 }, 00:27:41.375 "method": "bdev_nvme_attach_controller" 00:27:41.375 }' 00:27:41.375 [2024-07-13 20:15:28.866324] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:41.375 [2024-07-13 20:15:28.866405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279488 ] 00:27:41.375 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.375 [2024-07-13 20:15:28.931799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.375 [2024-07-13 20:15:29.023448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.338 Running I/O for 1 seconds... 00:27:44.273 00:27:44.273 Latency(us) 00:27:44.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.273 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.273 Verification LBA range: start 0x0 length 0x400 00:27:44.273 Nvme1n1 : 1.17 218.72 13.67 0.00 0.00 289904.45 22427.88 239230.67 00:27:44.273 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.273 Verification LBA range: start 0x0 length 0x400 00:27:44.273 Nvme2n1 : 1.19 214.99 13.44 0.00 0.00 290299.45 21651.15 251658.24 00:27:44.273 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.273 Verification LBA range: start 0x0 length 0x400 00:27:44.273 Nvme3n1 : 1.18 270.96 16.93 0.00 0.00 226411.56 18932.62 250104.79 00:27:44.273 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.273 Verification LBA range: start 0x0 length 0x400 00:27:44.273 Nvme4n1 : 1.09 235.25 14.70 0.00 0.00 255381.62 16699.54 256318.58 00:27:44.273 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.273 Verification LBA range: start 0x0 length 0x400 00:27:44.273 Nvme5n1 : 1.14 224.92 14.06 0.00 0.00 263360.28 21942.42 256318.58 00:27:44.273 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.273 Verification LBA range: start 0x0 length 0x400 00:27:44.273 Nvme6n1 : 1.17 222.62 13.91 0.00 0.00 260416.09 7912.87 271853.04 00:27:44.273 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.273 Verification LBA range: start 0x0 length 0x400 00:27:44.273 Nvme7n1 : 1.13 226.34 14.15 0.00 0.00 252699.31 19029.71 254765.13 00:27:44.273 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.273 Verification LBA range: start 0x0 length 0x400 
00:27:44.273 Nvme8n1 : 1.19 268.54 16.78 0.00 0.00 210181.39 17670.45 257872.02 00:27:44.273 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.273 Verification LBA range: start 0x0 length 0x400 00:27:44.273 Nvme9n1 : 1.20 265.85 16.62 0.00 0.00 209541.04 17961.72 259425.47 00:27:44.273 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:44.273 Verification LBA range: start 0x0 length 0x400 00:27:44.273 Nvme10n1 : 1.20 217.74 13.61 0.00 0.00 251246.14 2475.80 282727.16 00:27:44.273 =================================================================================================================== 00:27:44.273 Total : 2365.91 147.87 0.00 0.00 248489.45 2475.80 282727.16 00:27:44.531 20:15:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:44.531 20:15:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:44.531 20:15:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:44.531 20:15:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:44.531 20:15:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:44.531 20:15:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:44.531 20:15:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:44.531 20:15:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:44.531 20:15:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:44.531 20:15:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:44.531 20:15:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:44.531 rmmod nvme_tcp 00:27:44.531 rmmod nvme_fabrics 00:27:44.531 rmmod nvme_keyring 00:27:44.531 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:44.531 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:44.531 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:44.531 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3278891 ']' 00:27:44.531 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3278891 00:27:44.531 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 3278891 ']' 00:27:44.531 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 3278891 00:27:44.531 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:27:44.531 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:44.531 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3278891 00:27:44.531 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:44.531 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:44.531 20:15:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3278891' 00:27:44.531 killing process with pid 3278891 00:27:44.531 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 3278891 00:27:44.531 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 3278891 00:27:45.098 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:45.098 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:45.098 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:45.098 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:45.098 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:45.098 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:45.098 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:45.098 20:15:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:47.003 00:27:47.003 real 0m11.726s 00:27:47.003 user 0m34.155s 00:27:47.003 sys 0m3.221s 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:47.003 ************************************ 00:27:47.003 END TEST nvmf_shutdown_tc1 00:27:47.003 ************************************ 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:47.003 ************************************ 00:27:47.003 START TEST nvmf_shutdown_tc2 00:27:47.003 ************************************ 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:47.003 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:47.003 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:47.003 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:47.003 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:47.003 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.004 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.004 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:47.004 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.004 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.004 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:47.004 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:47.004 20:15:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.004 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:47.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:27:47.262 00:27:47.262 --- 10.0.0.2 ping statistics --- 00:27:47.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.262 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:47.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:27:47.262 00:27:47.262 --- 10.0.0.1 ping statistics --- 00:27:47.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.262 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3280256 
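Condensed from the nvmf/common.sh trace a few entries above, the tc2 prologue splits the two physical ports between network namespaces so that target and initiator traffic crosses a real link; these are the same commands the log just ran, gathered in one place:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the host netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host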
00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3280256 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3280256 ']' 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:47.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:47.262 20:15:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.262 [2024-07-13 20:15:34.829026] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:47.262 [2024-07-13 20:15:34.829100] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:47.262 EAL: No free 2048 kB hugepages reported on node 1 00:27:47.262 [2024-07-13 20:15:34.898617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:47.521 [2024-07-13 20:15:34.990432] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.521 [2024-07-13 20:15:34.990494] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.521 [2024-07-13 20:15:34.990511] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.521 [2024-07-13 20:15:34.990524] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.521 [2024-07-13 20:15:34.990536] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
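A quick decode of the -m 0x1E core mask passed to nvmf_tgt above: binary 11110, i.e. bits 1-4 set and bit 0 clear, which is why the app layer reports "Total cores available: 4" and the reactor notices that follow land on cores 1-4 while core 0 stays free:

# 0x1E == (1<<1)|(1<<2)|(1<<3)|(1<<4): reactors on cores 1,2,3,4; core 0 unused
printf '0x%x\n' "$(( (1 << 1) | (1 << 2) | (1 << 3) | (1 << 4) ))"   # prints 0x1e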
00:27:47.521 [2024-07-13 20:15:34.990621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:47.521 [2024-07-13 20:15:34.990740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:47.521 [2024-07-13 20:15:34.990975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:27:47.521 [2024-07-13 20:15:34.990980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.521 [2024-07-13 20:15:35.137551] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:47.521 20:15:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.521 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.779 Malloc1 00:27:47.779 [2024-07-13 20:15:35.219100] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.779 Malloc2 00:27:47.779 Malloc3 00:27:47.779 Malloc4 00:27:47.779 Malloc5 00:27:48.037 Malloc6 00:27:48.037 Malloc7 00:27:48.037 Malloc8 00:27:48.037 Malloc9 00:27:48.037 Malloc10 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3280438 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3280438 /var/tmp/bdevperf.sock 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3280438 ']' 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
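
shutdown.sh@26 clears rpcs.txt, the @27/@28 loop appends one stanza per subsystem, and the single rpc_cmd at @35 replays the whole file as a batch; that batch is what produces the Malloc1..Malloc10 bdevs and, shortly after, the 10.0.0.2:4420 listener. The stanza below is a hypothetical reconstruction inferred from those outputs; the RPC names are real SPDK RPCs, but the exact arguments and bdev sizes used by shutdown.sh may differ:

    testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target  # from the rm -rf path above
    for i in {1..10}; do
    cat >> "$testdir/rpcs.txt" <<EOF
    bdev_malloc_create -b Malloc$i 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    EOF
    done
    rpc_cmd < "$testdir/rpcs.txt"   # the batched call at shutdown.sh@35

With the subsystems in place, the test launches bdevperf and waits for its RPC socket, which is the "Waiting for process..." banner that follows.
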
00:27:48.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.037 { 00:27:48.037 "params": { 00:27:48.037 "name": "Nvme$subsystem", 00:27:48.037 "trtype": "$TEST_TRANSPORT", 00:27:48.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.037 "adrfam": "ipv4", 00:27:48.037 "trsvcid": "$NVMF_PORT", 00:27:48.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.037 "hdgst": ${hdgst:-false}, 00:27:48.037 "ddgst": ${ddgst:-false} 00:27:48.037 }, 00:27:48.037 "method": "bdev_nvme_attach_controller" 00:27:48.037 } 00:27:48.037 EOF 00:27:48.037 )") 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.037 { 00:27:48.037 "params": { 00:27:48.037 "name": "Nvme$subsystem", 00:27:48.037 "trtype": "$TEST_TRANSPORT", 00:27:48.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.037 "adrfam": "ipv4", 00:27:48.037 "trsvcid": "$NVMF_PORT", 00:27:48.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.037 "hdgst": ${hdgst:-false}, 00:27:48.037 "ddgst": ${ddgst:-false} 00:27:48.037 }, 00:27:48.037 "method": "bdev_nvme_attach_controller" 00:27:48.037 } 00:27:48.037 EOF 00:27:48.037 )") 00:27:48.037 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:48.296 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.296 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.296 { 00:27:48.296 "params": { 00:27:48.296 "name": "Nvme$subsystem", 00:27:48.296 "trtype": "$TEST_TRANSPORT", 00:27:48.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.296 "adrfam": "ipv4", 00:27:48.296 "trsvcid": "$NVMF_PORT", 00:27:48.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.296 "hdgst": ${hdgst:-false}, 00:27:48.296 "ddgst": ${ddgst:-false} 00:27:48.296 }, 00:27:48.296 "method": "bdev_nvme_attach_controller" 00:27:48.296 } 00:27:48.296 EOF 00:27:48.296 )") 00:27:48.296 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:48.296 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.296 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.296 { 00:27:48.296 "params": { 00:27:48.296 "name": "Nvme$subsystem", 00:27:48.296 "trtype": "$TEST_TRANSPORT", 00:27:48.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.296 "adrfam": "ipv4", 00:27:48.296 "trsvcid": "$NVMF_PORT", 
00:27:48.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.297 "hdgst": ${hdgst:-false}, 00:27:48.297 "ddgst": ${ddgst:-false} 00:27:48.297 }, 00:27:48.297 "method": "bdev_nvme_attach_controller" 00:27:48.297 } 00:27:48.297 EOF 00:27:48.297 )") 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.297 { 00:27:48.297 "params": { 00:27:48.297 "name": "Nvme$subsystem", 00:27:48.297 "trtype": "$TEST_TRANSPORT", 00:27:48.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.297 "adrfam": "ipv4", 00:27:48.297 "trsvcid": "$NVMF_PORT", 00:27:48.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.297 "hdgst": ${hdgst:-false}, 00:27:48.297 "ddgst": ${ddgst:-false} 00:27:48.297 }, 00:27:48.297 "method": "bdev_nvme_attach_controller" 00:27:48.297 } 00:27:48.297 EOF 00:27:48.297 )") 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.297 { 00:27:48.297 "params": { 00:27:48.297 "name": "Nvme$subsystem", 00:27:48.297 "trtype": "$TEST_TRANSPORT", 00:27:48.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.297 "adrfam": "ipv4", 00:27:48.297 "trsvcid": "$NVMF_PORT", 00:27:48.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.297 "hdgst": ${hdgst:-false}, 00:27:48.297 "ddgst": ${ddgst:-false} 00:27:48.297 }, 00:27:48.297 "method": "bdev_nvme_attach_controller" 00:27:48.297 } 00:27:48.297 EOF 00:27:48.297 )") 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.297 { 00:27:48.297 "params": { 00:27:48.297 "name": "Nvme$subsystem", 00:27:48.297 "trtype": "$TEST_TRANSPORT", 00:27:48.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.297 "adrfam": "ipv4", 00:27:48.297 "trsvcid": "$NVMF_PORT", 00:27:48.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.297 "hdgst": ${hdgst:-false}, 00:27:48.297 "ddgst": ${ddgst:-false} 00:27:48.297 }, 00:27:48.297 "method": "bdev_nvme_attach_controller" 00:27:48.297 } 00:27:48.297 EOF 00:27:48.297 )") 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.297 { 00:27:48.297 "params": { 00:27:48.297 "name": "Nvme$subsystem", 00:27:48.297 "trtype": "$TEST_TRANSPORT", 00:27:48.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.297 "adrfam": "ipv4", 00:27:48.297 "trsvcid": "$NVMF_PORT", 00:27:48.297 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.297 "hdgst": ${hdgst:-false}, 00:27:48.297 "ddgst": ${ddgst:-false} 00:27:48.297 }, 00:27:48.297 "method": "bdev_nvme_attach_controller" 00:27:48.297 } 00:27:48.297 EOF 00:27:48.297 )") 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.297 { 00:27:48.297 "params": { 00:27:48.297 "name": "Nvme$subsystem", 00:27:48.297 "trtype": "$TEST_TRANSPORT", 00:27:48.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.297 "adrfam": "ipv4", 00:27:48.297 "trsvcid": "$NVMF_PORT", 00:27:48.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.297 "hdgst": ${hdgst:-false}, 00:27:48.297 "ddgst": ${ddgst:-false} 00:27:48.297 }, 00:27:48.297 "method": "bdev_nvme_attach_controller" 00:27:48.297 } 00:27:48.297 EOF 00:27:48.297 )") 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:48.297 { 00:27:48.297 "params": { 00:27:48.297 "name": "Nvme$subsystem", 00:27:48.297 "trtype": "$TEST_TRANSPORT", 00:27:48.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.297 "adrfam": "ipv4", 00:27:48.297 "trsvcid": "$NVMF_PORT", 00:27:48.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.297 "hdgst": ${hdgst:-false}, 00:27:48.297 "ddgst": ${ddgst:-false} 00:27:48.297 }, 00:27:48.297 "method": "bdev_nvme_attach_controller" 00:27:48.297 } 00:27:48.297 EOF 00:27:48.297 )") 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:48.297 20:15:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:48.297 "params": { 00:27:48.297 "name": "Nvme1", 00:27:48.297 "trtype": "tcp", 00:27:48.297 "traddr": "10.0.0.2", 00:27:48.297 "adrfam": "ipv4", 00:27:48.297 "trsvcid": "4420", 00:27:48.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:48.297 "hdgst": false, 00:27:48.297 "ddgst": false 00:27:48.297 }, 00:27:48.297 "method": "bdev_nvme_attach_controller" 00:27:48.297 },{ 00:27:48.297 "params": { 00:27:48.297 "name": "Nvme2", 00:27:48.297 "trtype": "tcp", 00:27:48.297 "traddr": "10.0.0.2", 00:27:48.297 "adrfam": "ipv4", 00:27:48.297 "trsvcid": "4420", 00:27:48.297 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:48.297 "hdgst": false, 00:27:48.297 "ddgst": false 00:27:48.297 }, 00:27:48.297 "method": "bdev_nvme_attach_controller" 00:27:48.297 },{ 00:27:48.297 "params": { 00:27:48.297 "name": "Nvme3", 00:27:48.297 "trtype": "tcp", 00:27:48.297 "traddr": "10.0.0.2", 00:27:48.297 "adrfam": "ipv4", 00:27:48.297 "trsvcid": "4420", 00:27:48.297 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:48.297 "hdgst": false, 00:27:48.297 "ddgst": false 00:27:48.297 }, 00:27:48.297 "method": "bdev_nvme_attach_controller" 00:27:48.297 },{ 00:27:48.297 "params": { 00:27:48.297 "name": "Nvme4", 00:27:48.297 "trtype": "tcp", 00:27:48.297 "traddr": "10.0.0.2", 00:27:48.297 "adrfam": "ipv4", 00:27:48.297 "trsvcid": "4420", 00:27:48.297 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:48.297 "hdgst": false, 00:27:48.297 "ddgst": false 00:27:48.297 }, 00:27:48.297 "method": "bdev_nvme_attach_controller" 00:27:48.297 },{ 00:27:48.297 "params": { 00:27:48.297 "name": "Nvme5", 00:27:48.297 "trtype": "tcp", 00:27:48.297 "traddr": "10.0.0.2", 00:27:48.297 "adrfam": "ipv4", 00:27:48.297 "trsvcid": "4420", 00:27:48.297 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:48.297 "hdgst": false, 00:27:48.297 "ddgst": false 00:27:48.297 }, 00:27:48.297 "method": "bdev_nvme_attach_controller" 00:27:48.297 },{ 00:27:48.297 "params": { 00:27:48.297 "name": "Nvme6", 00:27:48.297 "trtype": "tcp", 00:27:48.297 "traddr": "10.0.0.2", 00:27:48.297 "adrfam": "ipv4", 00:27:48.297 "trsvcid": "4420", 00:27:48.297 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:48.297 "hdgst": false, 00:27:48.297 "ddgst": false 00:27:48.297 }, 00:27:48.297 "method": "bdev_nvme_attach_controller" 00:27:48.297 },{ 00:27:48.297 "params": { 00:27:48.297 "name": "Nvme7", 00:27:48.297 "trtype": "tcp", 00:27:48.297 "traddr": "10.0.0.2", 00:27:48.297 "adrfam": "ipv4", 00:27:48.297 "trsvcid": "4420", 00:27:48.297 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:48.297 "hdgst": false, 00:27:48.297 "ddgst": false 00:27:48.297 }, 00:27:48.297 "method": "bdev_nvme_attach_controller" 00:27:48.297 },{ 00:27:48.297 "params": { 00:27:48.297 "name": "Nvme8", 00:27:48.297 "trtype": "tcp", 00:27:48.297 "traddr": "10.0.0.2", 00:27:48.297 "adrfam": "ipv4", 00:27:48.297 "trsvcid": "4420", 00:27:48.297 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:48.297 "hdgst": false, 
00:27:48.297 "ddgst": false 00:27:48.297 }, 00:27:48.297 "method": "bdev_nvme_attach_controller" 00:27:48.297 },{ 00:27:48.297 "params": { 00:27:48.297 "name": "Nvme9", 00:27:48.297 "trtype": "tcp", 00:27:48.297 "traddr": "10.0.0.2", 00:27:48.297 "adrfam": "ipv4", 00:27:48.297 "trsvcid": "4420", 00:27:48.297 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:48.297 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:48.297 "hdgst": false, 00:27:48.297 "ddgst": false 00:27:48.297 }, 00:27:48.298 "method": "bdev_nvme_attach_controller" 00:27:48.298 },{ 00:27:48.298 "params": { 00:27:48.298 "name": "Nvme10", 00:27:48.298 "trtype": "tcp", 00:27:48.298 "traddr": "10.0.0.2", 00:27:48.298 "adrfam": "ipv4", 00:27:48.298 "trsvcid": "4420", 00:27:48.298 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:48.298 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:48.298 "hdgst": false, 00:27:48.298 "ddgst": false 00:27:48.298 }, 00:27:48.298 "method": "bdev_nvme_attach_controller" 00:27:48.298 }' 00:27:48.298 [2024-07-13 20:15:35.728547] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:48.298 [2024-07-13 20:15:35.728623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280438 ] 00:27:48.298 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.298 [2024-07-13 20:15:35.791910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.298 [2024-07-13 20:15:35.878487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.197 Running I/O for 10 seconds... 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:50.197 20:15:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:50.455 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:50.455 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:50.455 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:50.455 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:50.455 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.455 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.455 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.455 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:50.455 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:50.455 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3280438 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3280438 ']' 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3280438 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 
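
The loop traced here is shutdown.sh's waitforio: while bdevperf runs its 10-second verify job, the test polls bdev_get_iostat over the bdevperf RPC socket until Nvme1n1 shows at least 100 completed reads (3, then 67, then 131 above), proving I/O is actually in flight before the target is killed mid-run. Reassembled from the @50-69 fragments:

    waitforio() {
        local rpc_sock=$1 bdev=$2
        [ -z "$rpc_sock" ] && return 1    # @50
        [ -z "$bdev" ] && return 1        # @54

        local ret=1 i read_io_count       # @57/@58
        for ((i = 10; i != 0; i--)); do   # @59: at most ten polls
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')       # @60
            if [ "$read_io_count" -ge 100 ]; then     # @63
                ret=0                                 # @64
                break                                 # @65
            fi
            sleep 0.25                                # @67
        done
        return $ret                       # 0 once enough reads were observed
    }

Once it returns 0, killprocess tears bdevperf down (the @946-970 sequence that follows), and bdevperf prints the per-controller latency table on its way out.
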
00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:50.713 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3280438 00:27:50.971 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:50.971 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:50.971 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3280438' 00:27:50.971 killing process with pid 3280438 00:27:50.971 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3280438 00:27:50.971 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3280438 00:27:50.971 Received shutdown signal, test time was about 0.934711 seconds 00:27:50.971 00:27:50.971 Latency(us) 00:27:50.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.971 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:50.971 Verification LBA range: start 0x0 length 0x400 00:27:50.971 Nvme1n1 : 0.91 225.47 14.09 0.00 0.00 274567.90 11359.57 262532.36 00:27:50.971 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:50.971 Verification LBA range: start 0x0 length 0x400 00:27:50.971 Nvme2n1 : 0.89 215.15 13.45 0.00 0.00 287783.44 41943.04 223696.21 00:27:50.971 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:50.971 Verification LBA range: start 0x0 length 0x400 00:27:50.971 Nvme3n1 : 0.93 274.12 17.13 0.00 0.00 221540.50 18058.81 265639.25 00:27:50.971 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:50.971 Verification LBA range: start 0x0 length 0x400 00:27:50.971 Nvme4n1 : 0.93 276.27 17.27 0.00 0.00 215147.71 17864.63 256318.58 00:27:50.971 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:50.971 Verification LBA range: start 0x0 length 0x400 00:27:50.971 Nvme5n1 : 0.92 214.71 13.42 0.00 0.00 269510.73 5048.70 279620.27 00:27:50.971 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:50.971 Verification LBA range: start 0x0 length 0x400 00:27:50.971 Nvme6n1 : 0.92 208.81 13.05 0.00 0.00 272510.93 32234.00 260978.92 00:27:50.971 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:50.971 Verification LBA range: start 0x0 length 0x400 00:27:50.971 Nvme7n1 : 0.90 213.61 13.35 0.00 0.00 259136.73 39224.51 239230.67 00:27:50.971 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:50.971 Verification LBA range: start 0x0 length 0x400 00:27:50.971 Nvme8n1 : 0.90 212.61 13.29 0.00 0.00 255243.25 22719.15 260978.92 00:27:50.971 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:50.971 Verification LBA range: start 0x0 length 0x400 00:27:50.971 Nvme9n1 : 0.91 211.46 13.22 0.00 0.00 250946.24 20388.98 260978.92 00:27:50.971 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:50.971 Verification LBA range: start 0x0 length 0x400 00:27:50.971 Nvme10n1 : 0.93 206.47 12.90 0.00 0.00 252341.67 23204.60 301368.51 00:27:50.971 =================================================================================================================== 00:27:50.971 Total : 2258.68 
141.17 0.00 0.00 253706.09 5048.70 301368.51 00:27:51.229 20:15:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:52.159 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3280256 00:27:52.159 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:52.159 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:52.159 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:52.159 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:52.159 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:52.159 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:52.160 rmmod nvme_tcp 00:27:52.160 rmmod nvme_fabrics 00:27:52.160 rmmod nvme_keyring 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3280256 ']' 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3280256 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 3280256 ']' 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 3280256 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3280256 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3280256' 00:27:52.160 killing process with pid 3280256 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 3280256 00:27:52.160 20:15:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 3280256 00:27:52.726 20:15:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- 
# '[' '' == iso ']' 00:27:52.726 20:15:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:52.726 20:15:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:52.726 20:15:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:52.726 20:15:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:52.726 20:15:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.726 20:15:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:52.726 20:15:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.628 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:54.628 00:27:54.628 real 0m7.685s 00:27:54.628 user 0m23.376s 00:27:54.628 sys 0m1.444s 00:27:54.628 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:54.628 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:54.628 ************************************ 00:27:54.628 END TEST nvmf_shutdown_tc2 00:27:54.628 ************************************ 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:54.887 ************************************ 00:27:54.887 START TEST nvmf_shutdown_tc3 00:27:54.887 ************************************ 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@285 -- # xtrace_disable 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:54.887 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:54.888 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:54.888 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:54.888 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.888 
20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:54.888 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:54.888 20:15:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:54.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:27:54.888 00:27:54.888 --- 10.0.0.2 ping statistics --- 00:27:54.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.888 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:54.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:27:54.888 00:27:54.888 --- 10.0.0.1 ping statistics --- 00:27:54.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.888 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3281349 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3281349 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3281349 ']' 00:27:54.888 20:15:42 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:54.888 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:55.147 [2024-07-13 20:15:42.566159] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:27:55.147 [2024-07-13 20:15:42.566257] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:55.147 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.147 [2024-07-13 20:15:42.633448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:55.147 [2024-07-13 20:15:42.722677] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:55.147 [2024-07-13 20:15:42.722738] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:55.147 [2024-07-13 20:15:42.722766] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:55.147 [2024-07-13 20:15:42.722784] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:55.147 [2024-07-13 20:15:42.722794] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
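
These tracepoint notices repeat for every app started with -e 0xFFFF: all trace groups are enabled, so a snapshot can be pulled from the running target at any time. Both options below are quoted from the notices themselves:

    spdk_trace -s nvmf -i 0                       # live snapshot of events at runtime
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0    # or keep the raw buffer for offline debug
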
00:27:55.147 [2024-07-13 20:15:42.722846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:27:55.147 [2024-07-13 20:15:42.722910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:27:55.147 [2024-07-13 20:15:42.722980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:27:55.147 [2024-07-13 20:15:42.726895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:55.403 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:27:55.403 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0
00:27:55.403 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:55.403 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:55.403 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:55.403 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:55.403 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:55.403 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:55.403 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:55.403 [2024-07-13 20:15:42.887671] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:55.404 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:55.404 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10})
00:27:55.404 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems
00:27:55.404 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable
00:27:55.404 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:55.404 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:55.404 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}"
00:27:55.404 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat
[the for / cat xtrace pair above repeats verbatim for each of the 10 subsystems]
00:27:55.404 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd
00:27:55.404 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:55.404 20:15:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:55.404 Malloc1
00:27:55.404 [2024-07-13 20:15:42.977138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:55.404 Malloc2
00:27:55.404 Malloc3
00:27:55.661 Malloc4
00:27:55.661 Malloc5
00:27:55.661 Malloc6
00:27:55.661 Malloc7
00:27:55.661 Malloc8
00:27:55.919 Malloc9
00:27:55.919 Malloc10
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3281489
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3281489 /var/tmp/bdevperf.sock
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 3281489 ']'
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
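waitforlisten is the harness's readiness gate for the bdevperf process it just forked: rather than sleeping a fixed interval, it polls until the daemon answers an RPC on the named UNIX socket. A minimal sketch of that pattern, assuming scripts/rpc.py is available (the retry bound and the probe RPC are common SPDK practice, not the literal autotest_common.sh source):

# Block until $pid answers RPCs on $rpc_addr, or give up after max_retries.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = max_retries; i != 0; i--)); do
        # Abort early if the process died during startup.
        kill -0 "$pid" 2> /dev/null || return 1
        # Any successful RPC proves the server is up and listening.
        if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}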
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=()
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:27:55.919 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:27:55.920 {
00:27:55.920 "params": {
00:27:55.920 "name": "Nvme$subsystem",
00:27:55.920 "trtype": "$TEST_TRANSPORT",
00:27:55.920 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:55.920 "adrfam": "ipv4",
00:27:55.920 "trsvcid": "$NVMF_PORT",
00:27:55.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:55.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:55.920 "hdgst": ${hdgst:-false},
00:27:55.920 "ddgst": ${ddgst:-false}
00:27:55.920 },
00:27:55.920 "method": "bdev_nvme_attach_controller"
00:27:55.920 }
00:27:55.920 EOF
00:27:55.920 )")
00:27:55.920 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
[the for / config+=(heredoc) / cat trio above repeats verbatim for each of the 10 requested subsystems]
00:27:55.920 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
00:27:55.920 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=,
00:27:55.920 20:15:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:27:55.920 "params": {
00:27:55.920 "name": "Nvme1",
00:27:55.920 "trtype": "tcp",
00:27:55.920 "traddr": "10.0.0.2",
00:27:55.920 "adrfam": "ipv4",
00:27:55.920 "trsvcid": "4420",
00:27:55.920 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:27:55.920 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:27:55.920 "hdgst": false,
00:27:55.920 "ddgst": false
00:27:55.920 },
00:27:55.920 "method": "bdev_nvme_attach_controller"
00:27:55.920 },{
[identical expanded stanzas follow for Nvme2 through Nvme9, differing only in the index in "name", "subnqn" and "hostnqn"]
00:27:55.920 "params": {
00:27:55.920 "name": "Nvme10",
00:27:55.920 "trtype": "tcp",
00:27:55.920 "traddr": "10.0.0.2",
00:27:55.920 "adrfam": "ipv4",
00:27:55.920 "trsvcid": "4420",
00:27:55.920 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:27:55.920 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:27:55.920 "hdgst": false,
00:27:55.920 "ddgst": false
00:27:55.920 },
00:27:55.920 "method": "bdev_nvme_attach_controller"
00:27:55.920 }'
00:27:55.920 [2024-07-13 20:15:43.481518] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:27:55.920 [2024-07-13 20:15:43.481595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281489 ]
00:27:55.920 EAL: No free 2048 kB hugepages reported on node 1
00:27:55.920 [2024-07-13 20:15:43.546119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:56.178 [2024-07-13 20:15:43.633896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:57.570 Running I/O for 10 seconds...
00:27:57.829 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:27:57.829 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0
00:27:57.829 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:27:57.829 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:57.829 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']'
00:27:58.118 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:27:58.377 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:27:58.377 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:27:58.377 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:58.377 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:58.377 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:27:58.377 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:58.377 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:58.377 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67
00:27:58.377 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']'
00:27:58.377 20:15:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
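The three iterations just traced are shutdown.sh's waitforio: the target must not be killed until bdevperf demonstrably has I/O in flight, so the loop polls bdev_get_iostat until Nvme1n1 reports at least 100 completed reads (3, then 67, then 131 here). A sketch reconstructed from the xtrace, with rpc_cmd standing in for the harness's scripts/rpc.py wrapper:

# Poll iostat on the bdevperf RPC socket until $bdev has >= 100 reads,
# checking at most 10 times, 0.25 s apart.
waitforio() {
    local rpc_addr=$1 bdev=$2
    [ -z "$rpc_addr" ] && return 1
    [ -z "$bdev" ] && return 1
    local ret=1 i
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_addr" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        # 100 reads is enough to prove the connections are live at -q 64.
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}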
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3281349
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 3281349 ']'
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 3281349
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3281349
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3281349'
killing process with pid 3281349
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 3281349
00:27:58.650 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 3281349
00:27:58.650 [2024-07-13 20:15:46.132574] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dca90 is same with the state(5) to be set
00:27:58.650 [2024-07-13 20:15:46.133738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e88c0 is same with the state(5) to be set
[the tqpair=0x8e88c0 message above repeats verbatim about fifty times, timestamps 20:15:46.133772 through 20:15:46.134563]
00:27:58.651 [2024-07-13 20:15:46.136319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:58.651 [2024-07-13 20:15:46.136359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair above repeats for cid:1, cid:2 and cid:3]
00:27:58.651 [2024-07-13 20:15:46.136459] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe997d0 is same with the state(5) to be set
00:27:58.651 [2024-07-13 20:15:46.136532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:58.651 [2024-07-13 20:15:46.136553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the pair above likewise repeats for cid:1, cid:2 and cid:3 on the second admin queue]
00:27:58.651 [2024-07-13 20:15:46.136648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5df0 is same with the state(5) to be set
00:27:58.651 [2024-07-13 20:15:46.136742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.651 [2024-07-13 20:15:46.136764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the WRITE / ABORTED - SQ DELETION pair above repeats for cid:1 through cid:63, lba climbing from 24704 to 32640 in 128-block steps, timestamps 20:15:46.136792 through 20:15:46.138710]
00:27:58.653 [2024-07-13 20:15:46.138791] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8c8ff0 was disconnected and freed. reset controller.
00:27:58.653 [2024-07-13 20:15:46.140433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.653 [2024-07-13 20:15:46.140463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:58.653 [2024-07-13 20:15:46.140484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.653 [2024-07-13 20:15:46.140500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the same READ / ABORTED - SQ DELETION pair repeats for cid:1 through cid:62, lba:24704 through lba:32512, timestamps 20:15:46.140516 through 20:15:46.142463; interleaved with that run, "tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dd3d0 is same with the state(5) to be set" is logged repeatedly from 20:15:46.140576 through 20:15:46.141439)
00:27:58.655 [2024-07-13 20:15:46.142540] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8ca3e0 was disconnected and freed. reset controller.
00:27:58.655 [2024-07-13 20:15:46.142902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9dd890 is same with the state(5) to be set
00:27:58.655 [2024-07-13 20:15:46.143028] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:58.655 [2024-07-13 20:15:46.143071] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c5df0 (9): Bad file descriptor
(interleaved with these, the same recv-state message repeats for tqpair=0x9dd890 from 20:15:46.142936 through 20:15:46.143716)
00:27:58.656 [2024-07-13 20:15:46.144989] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddd30 is same with the state(5) to be set
(the same message repeats for tqpair=0x9ddd30 from 20:15:46.145019 through 20:15:46.145951)
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddd30 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.145919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddd30 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.145938] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddd30 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.145951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ddd30 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.146149] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:58.657 [2024-07-13 20:15:46.146222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce7d10 (9): Bad file descriptor 00:27:58.657 [2024-07-13 20:15:46.146932] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:58.657 [2024-07-13 20:15:46.147327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.657 [2024-07-13 20:15:46.147366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c5df0 with addr=10.0.0.2, port=4420 00:27:58.657 [2024-07-13 20:15:46.147384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5df0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.147443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe997d0 (9): Bad file descriptor 00:27:58.657 [2024-07-13 20:15:46.147524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.657 [2024-07-13 20:15:46.147546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.657 [2024-07-13 20:15:46.147561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.657 [2024-07-13 20:15:46.147575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.657 [2024-07-13 20:15:46.147590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.657 [2024-07-13 20:15:46.147603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.657 [2024-07-13 20:15:46.147625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.657 [2024-07-13 20:15:46.147638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.657 [2024-07-13 20:15:46.147651] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05a60 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.147696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.657 [2024-07-13 20:15:46.147716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.657 
[2024-07-13 20:15:46.147731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.657 [2024-07-13 20:15:46.147744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.657 [2024-07-13 20:15:46.147758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.657 [2024-07-13 20:15:46.147771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.657 [2024-07-13 20:15:46.147786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.657 [2024-07-13 20:15:46.147801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.657 [2024-07-13 20:15:46.147814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd056b0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.147892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.657 [2024-07-13 20:15:46.147914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.657 [2024-07-13 20:15:46.147930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.657 [2024-07-13 20:15:46.147943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.657 [2024-07-13 20:15:46.147958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.657 [2024-07-13 20:15:46.147971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.657 [2024-07-13 20:15:46.147985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:58.657 [2024-07-13 20:15:46.147999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.657 [2024-07-13 20:15:46.148012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe808c0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148570] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148608] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148622] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148634] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the 
state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148657] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148693] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148705] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148776] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148809] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148833] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148846] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148846] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe7dee0 was disconnected and freed. reset controller.
00:27:58.657 [2024-07-13 20:15:46.148860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148898] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148911] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148923] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.148981] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.149002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.149024] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.149044] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.149065] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.149088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.149115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.149138] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.149171] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.149186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.149198] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.149211] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.657 [2024-07-13 20:15:46.149223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149247] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149271] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149288] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149314] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149326] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149338] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149350] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149399] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149411] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149435] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149447] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.658 [2024-07-13 20:15:46.149460] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce7d10 with addr=10.0.0.2, port=4420 00:27:58.658 [2024-07-13 20:15:46.149484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7d10 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149509] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c5df0 (9): Bad file descriptor 00:27:58.658 [2024-07-13 20:15:46.149521] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7ac0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.149620] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:58.658 [2024-07-13 20:15:46.149972] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:58.658 [2024-07-13 20:15:46.150031] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd242b0 (9): Bad file descriptor 00:27:58.658 [2024-07-13 20:15:46.150057] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce7d10 (9): Bad file descriptor 00:27:58.658 [2024-07-13 20:15:46.150075] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.658 [2024-07-13 20:15:46.150094] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.658 [2024-07-13 20:15:46.150110] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:58.658 [2024-07-13 20:15:46.150199] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:58.658 [2024-07-13 20:15:46.150464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:58.658 [2024-07-13 20:15:46.150499] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:27:58.658 [2024-07-13 20:15:46.150515] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:27:58.658 [2024-07-13 20:15:46.150529] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:27:58.658 [2024-07-13 20:15:46.150706] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150732] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150745] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150792] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150804] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150807] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:58.658 [2024-07-13 20:15:46.150815] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150830] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150842] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150856] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150889] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150902] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150935] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150947] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.150988] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 
00:27:58.658 [2024-07-13 20:15:46.151000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.658 [2024-07-13 20:15:46.151025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd242b0 with addr=10.0.0.2, port=4420 00:27:58.658 [2024-07-13 20:15:46.151050] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151056] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd242b0 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151076] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151124] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151131] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:58.658 [2024-07-13 20:15:46.151136] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151151] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151163] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151175] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151200] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151221] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151234] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151246] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151258] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151270] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151286] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151315] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151339] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.658 [2024-07-13 20:15:46.151342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd242b0 (9): Bad file descriptor 00:27:58.659 [2024-07-13 20:15:46.151351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.151364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.151376] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.151388] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.151400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.151411] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.151423] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.151434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.151445] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.151456] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.151468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.151479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.659 
[2024-07-13 20:15:46.151499] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.151511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.151522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e7f60 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.151558] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:58.659 [2024-07-13 20:15:46.151579] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:58.659 [2024-07-13 20:15:46.151593] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:58.659 [2024-07-13 20:15:46.151666] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:58.659 [2024-07-13 20:15:46.151796] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:58.659 [2024-07-13 20:15:46.152262] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152290] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152305] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152348] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152388] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152400] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152395] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:58.659 [2024-07-13 20:15:46.152416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152439] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152474] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the
state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152504] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.659 [2024-07-13 20:15:46.152542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152554] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152566] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152577] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152589] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152601] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152624] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152648] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152659] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152671] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152683] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152702] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152716] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152765] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152777] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152796] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152859] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152880] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152904] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152916] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152939] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152962] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152975] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.152990] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.153002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.153014] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.153025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.153037] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.153048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.153060] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.153075] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.153087] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.153099] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.153111] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.153123] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.153134] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e8400 is same with the state(5) to be set 00:27:58.660 [2024-07-13 20:15:46.153724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.153748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.153771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.153787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.153803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.153818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.153834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.153858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.153882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.153898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.153914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.153928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.153944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.153959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.153974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.153989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.154004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.154019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.154034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.154049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.154070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.154085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.154101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.154116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.154132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.154146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.154165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.154179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.154195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.154209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.154224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.154250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.154266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.154280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.660 [2024-07-13 20:15:46.154295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:58.660 [2024-07-13 20:15:46.154309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:58.661 [2024-07-13 20:15:46.154632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 20:15:46.154934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.661 [2024-07-13 20:15:46.154950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.661 [2024-07-13 
20:15:46.154964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 26 command/completion pairs omitted: nvme_io_qpair_print_command *NOTICE*: READ sqid:1 cid:38-63 nsid:1 lba:21248-24448 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:27:58.662 [2024-07-13 20:15:46.155800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcda170 is same with the state(5) to be set
00:27:58.662 [2024-07-13 20:15:46.155897] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcda170 was disconnected and freed. reset controller.
00:27:58.662 [2024-07-13 20:15:46.157108] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:58.662 [2024-07-13 20:15:46.157176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9a310 (9): Bad file descriptor
[... 4 command/completion pairs omitted: nvme_admin_qpair_print_command *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 ...]
00:27:58.662 [2024-07-13 20:15:46.157356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe98f30 is same with the state(5) to be set
00:27:58.662 [2024-07-13 20:15:46.157383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05a60 (9): Bad file descriptor
00:27:58.662 [2024-07-13 20:15:46.157411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd056b0 (9): Bad file descriptor
[... 4 command/completion pairs omitted: nvme_admin_qpair_print_command *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 ...]
00:27:58.662 [2024-07-13 20:15:46.157586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9a720 is same with the state(5) to be set
00:27:58.662 [2024-07-13 20:15:46.157612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe808c0 (9): Bad file descriptor
[... 64 command/completion pairs omitted: nvme_io_qpair_print_command *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:27:58.663 [2024-07-13 20:15:46.169533] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb2560 is same with the state(5) to be set
00:27:58.663 [2024-07-13 20:15:46.171647] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:58.663 [2024-07-13 20:15:46.171703] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:58.664 [2024-07-13 20:15:46.171979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.664 [2024-07-13 20:15:46.172012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9a310 with addr=10.0.0.2, port=4420
00:27:58.664 [2024-07-13 20:15:46.172030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9a310 is same with the state(5) to be set
00:27:58.664 [2024-07-13 20:15:46.172097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe98f30 (9): Bad file descriptor
00:27:58.664 [2024-07-13 20:15:46.172148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9a720 (9): Bad file descriptor
00:27:58.664 [2024-07-13 20:15:46.172188] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:58.664 [2024-07-13 20:15:46.172215] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9a310 (9): Bad file descriptor
00:27:58.664 [2024-07-13 20:15:46.172343] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:58.664 [2024-07-13 20:15:46.172530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.664 [2024-07-13 20:15:46.172560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c5df0 with addr=10.0.0.2, port=4420
00:27:58.664 [2024-07-13 20:15:46.172577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5df0 is same with the state(5) to be set
00:27:58.664 [2024-07-13 20:15:46.172790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.664 [2024-07-13 20:15:46.172815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe997d0 with addr=10.0.0.2, port=4420
00:27:58.664 [2024-07-13 20:15:46.172831] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe997d0 is same with the state(5) to be set
[... 64 command/completion pairs omitted: nvme_io_qpair_print_command *NOTICE*: WRITE sqid:1 cid:0-4 nsid:1 lba:32768-33280 and READ sqid:1 cid:5-63 nsid:1 lba:25216-32640, len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:27:58.665 [2024-07-13 20:15:46.174943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccfed0 is same with the state(5) to be set
[... 22 command/completion pairs omitted: nvme_io_qpair_print_command *NOTICE*: READ sqid:1 cid:0-21 nsid:1 lba:24576-27264 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:27:58.666 [2024-07-13 20:15:46.176946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.176960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.176976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.176990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:58.666 [2024-07-13 20:15:46.177585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.666 [2024-07-13 20:15:46.177811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.666 [2024-07-13 20:15:46.177827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.177841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.177857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.177878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.177895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 
20:15:46.177912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.177928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.177942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.177958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.177972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.177988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.178002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.178018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.178033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.178049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.178064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.178079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.178094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.178109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.178124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.178139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.178154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.178179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.178197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.178214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.178229] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.178244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.178259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.178274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7b520 is same with the state(5) to be set 00:27:58.667 [2024-07-13 20:15:46.179518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.179541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.179563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.179578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.179594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.179610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.179626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.179640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.179657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.179671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.179687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.179702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.179718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.179732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.179748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.179762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.179777] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.179792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.179808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.179827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.179844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.179858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.179882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.179898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.179914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.179929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.179962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.179979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.179996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.180010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.180026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.180040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.180056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.180070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.180086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.180100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.180116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.180130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.180146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.180161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.180180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.180194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.180211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.180224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.180245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.180260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.180276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.180290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.180306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.180320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.180336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.180351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.180366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.180380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.180396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.667 [2024-07-13 20:15:46.180411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.667 [2024-07-13 20:15:46.180427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.180969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.180983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:58.668 [2024-07-13 20:15:46.181083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 
20:15:46.181393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.668 [2024-07-13 20:15:46.181546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.668 [2024-07-13 20:15:46.181560] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7ca00 is same with the state(5) to be set 00:27:58.668 [2024-07-13 20:15:46.183119] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:58.668 [2024-07-13 20:15:46.183151] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:58.668 [2024-07-13 20:15:46.183170] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:58.668 [2024-07-13 20:15:46.183187] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:58.668 [2024-07-13 20:15:46.183463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.668 [2024-07-13 20:15:46.183491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce7d10 with addr=10.0.0.2, port=4420 00:27:58.668 [2024-07-13 20:15:46.183508] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7d10 is same with the state(5) to be set 00:27:58.668 [2024-07-13 20:15:46.183535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c5df0 (9): Bad file descriptor 00:27:58.668 [2024-07-13 20:15:46.183554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe997d0 (9): Bad file descriptor 00:27:58.668 [2024-07-13 20:15:46.183570] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in 
error state 00:27:58.668 [2024-07-13 20:15:46.183583] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:58.668 [2024-07-13 20:15:46.183599] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:58.668 [2024-07-13 20:15:46.183685] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:58.668 [2024-07-13 20:15:46.183710] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:58.668 [2024-07-13 20:15:46.183730] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:58.668 [2024-07-13 20:15:46.183750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce7d10 (9): Bad file descriptor 00:27:58.669 [2024-07-13 20:15:46.183845] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:58.669 [2024-07-13 20:15:46.184025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.669 [2024-07-13 20:15:46.184056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd242b0 with addr=10.0.0.2, port=4420 00:27:58.669 [2024-07-13 20:15:46.184072] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd242b0 is same with the state(5) to be set 00:27:58.669 [2024-07-13 20:15:46.184213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.669 [2024-07-13 20:15:46.184238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd05a60 with addr=10.0.0.2, port=4420 00:27:58.669 [2024-07-13 20:15:46.184253] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd05a60 is same with the state(5) to be set 00:27:58.669 [2024-07-13 20:15:46.184395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.669 [2024-07-13 20:15:46.184419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe808c0 with addr=10.0.0.2, port=4420 00:27:58.669 [2024-07-13 20:15:46.184434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe808c0 is same with the state(5) to be set 00:27:58.669 [2024-07-13 20:15:46.184573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.669 [2024-07-13 20:15:46.184597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd056b0 with addr=10.0.0.2, port=4420 00:27:58.669 [2024-07-13 20:15:46.184612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd056b0 is same with the state(5) to be set 00:27:58.669 [2024-07-13 20:15:46.184628] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:58.669 [2024-07-13 20:15:46.184641] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:58.669 [2024-07-13 20:15:46.184654] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:58.669 [2024-07-13 20:15:46.184673] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:58.669 [2024-07-13 20:15:46.184687] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:58.669 [2024-07-13 20:15:46.184699] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:58.669 [2024-07-13 20:15:46.185523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.185547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.185570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.185586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.185602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.185617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.185633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.185647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.185663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.185677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.185693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.185712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.185729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.185743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.185760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.185774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.185790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.185804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.185820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.185834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.185850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.185872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.185892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.185913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.185929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.185943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.185959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.185973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.185989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.186003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.186019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.186034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.186049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.186063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.186079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.186094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.186114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.186128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.186144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.186158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.186174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.186194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.186210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.186224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.186240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.186254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.186270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.186284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.186300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.186314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.186331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.186345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.186361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.186376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.669 [2024-07-13 20:15:46.186392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.669 [2024-07-13 20:15:46.186406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:58.670 [2024-07-13 20:15:46.186761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.186975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.186989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 
[2024-07-13 20:15:46.187095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 
20:15:46.187412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.187547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.187561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe7f3c0 is same with the state(5) to be set 00:27:58.670 [2024-07-13 20:15:46.188825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.188849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.188875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.188893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.188909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.188924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.188939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.188958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.188975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.188990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.189006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.670 [2024-07-13 20:15:46.189020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.670 [2024-07-13 20:15:46.189037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.189978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.189992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.190008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.190021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.190037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.190052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.190068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.190083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.190099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.190113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.190129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.190147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.190164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.190178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.190194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.190207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.190223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.190237] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.190253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.190267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.190283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.190297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.190312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.190326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.671 [2024-07-13 20:15:46.190342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.671 [2024-07-13 20:15:46.190356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.672 [2024-07-13 20:15:46.190386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.672 [2024-07-13 20:15:46.190416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.672 [2024-07-13 20:15:46.190446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.672 [2024-07-13 20:15:46.190476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.672 [2024-07-13 20:15:46.190505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.672 [2024-07-13 20:15:46.190539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.672 [2024-07-13 20:15:46.190569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.672 [2024-07-13 20:15:46.190599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.672 [2024-07-13 20:15:46.190629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.672 [2024-07-13 20:15:46.190659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.672 [2024-07-13 20:15:46.190689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.672 [2024-07-13 20:15:46.190720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.672 [2024-07-13 20:15:46.190750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.672 [2024-07-13 20:15:46.190779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.672 [2024-07-13 20:15:46.190809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:58.672 [2024-07-13 20:15:46.190824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd0d60 is same with the state(5) to be set 00:27:58.672 [2024-07-13 20:15:46.192492] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: 
*ERROR*: Resetting controller failed.
00:27:58.672 [2024-07-13 20:15:46.192518] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:58.672 [2024-07-13 20:15:46.192537] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:58.672 task offset: 24576 on job bdev=Nvme1n1 fails
00:27:58.672
00:27:58.672 Latency(us)
00:27:58.672 Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average        min        max
00:27:58.672 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:58.672 Job: Nvme1n1 ended in about 0.97 seconds with error
00:27:58.672 Verification LBA range: start 0x0 length 0x400
00:27:58.672 Nvme1n1    :       0.97   198.85    12.43    66.28   0.00  238836.10    4150.61  254765.13
00:27:58.672 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:58.672 Job: Nvme2n1 ended in about 0.97 seconds with error
00:27:58.672 Verification LBA range: start 0x0 length 0x400
00:27:58.672 Nvme2n1    :       0.97   198.06    12.38    66.02   0.00  235197.01    4538.97  251658.24
00:27:58.672 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:58.672 Job: Nvme3n1 ended in about 1.00 seconds with error
00:27:58.672 Verification LBA range: start 0x0 length 0x400
00:27:58.672 Nvme3n1    :       1.00   196.77    12.30    63.93   0.00  233979.98   19612.25  246997.90
00:27:58.672 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:58.672 Job: Nvme4n1 ended in about 1.00 seconds with error
00:27:58.672 Verification LBA range: start 0x0 length 0x400
00:27:58.672 Nvme4n1    :       1.00   191.15    11.95    63.72   0.00  234714.64   19612.25  250104.79
00:27:58.672 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:58.672 Job: Nvme5n1 ended in about 1.01 seconds with error
00:27:58.672 Verification LBA range: start 0x0 length 0x400
00:27:58.672 Nvme5n1    :       1.01   127.01     7.94    63.51   0.00  308032.09   22816.24  264085.81
00:27:58.672 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:58.672 Verification LBA range: start 0x0 length 0x400
00:27:58.672 Nvme6n1    :       0.97   197.73    12.36     0.00   0.00  289667.86   21068.61  257872.02
00:27:58.672 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:58.672 Job: Nvme7n1 ended in about 1.01 seconds with error
00:27:58.672 Verification LBA range: start 0x0 length 0x400
00:27:58.672 Nvme7n1    :       1.01   189.39    11.84    63.13   0.00  223388.63   22427.88  253211.69
00:27:58.672 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:58.672 Job: Nvme8n1 ended in about 1.02 seconds with error
00:27:58.672 Verification LBA range: start 0x0 length 0x400
00:27:58.672 Nvme8n1    :       1.02   188.79    11.80    62.93   0.00  219742.06   20583.16  251658.24
00:27:58.672 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:58.672 Job: Nvme9n1 ended in about 0.98 seconds with error
00:27:58.672 Verification LBA range: start 0x0 length 0x400
00:27:58.672 Nvme9n1    :       0.98   130.32     8.14    65.16   0.00  275604.67   23398.78  299815.06
00:27:58.672 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:58.672 Job: Nvme10n1 ended in about 1.00 seconds with error
00:27:58.672 Verification LBA range: start 0x0 length 0x400
00:27:58.672 Nvme10n1   :       1.00   128.54     8.03    64.27   0.00  274272.46   22524.97  267192.70
00:27:58.672 ===================================================================================================================
00:27:58.672 Total      :             1746.61   109.16   578.94   0.00  249581.83    4150.61  299815.06
00:27:58.672 [2024-07-13 20:15:46.218980] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:58.672 [2024-07-13 20:15:46.219072] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:58.672 [2024-07-13 20:15:46.219160] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd242b0 (9): Bad file descriptor
00:27:58.672 [2024-07-13 20:15:46.219187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd05a60 (9): Bad file descriptor
00:27:58.672 [2024-07-13 20:15:46.219206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe808c0 (9): Bad file descriptor
00:27:58.672 [2024-07-13 20:15:46.219223] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd056b0 (9): Bad file descriptor
00:27:58.672 [2024-07-13 20:15:46.219240] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:58.672 [2024-07-13 20:15:46.219266] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:58.672 [2024-07-13 20:15:46.219281] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:58.672 [2024-07-13 20:15:46.219480] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:58.672 [2024-07-13 20:15:46.219753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.672 [2024-07-13 20:15:46.219787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9a720 with addr=10.0.0.2, port=4420
00:27:58.672 [2024-07-13 20:15:46.219807] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9a720 is same with the state(5) to be set
00:27:58.672 [2024-07-13 20:15:46.219973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.672 [2024-07-13 20:15:46.220000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe98f30 with addr=10.0.0.2, port=4420
00:27:58.672 [2024-07-13 20:15:46.220017] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe98f30 is same with the state(5) to be set
00:27:58.672 [2024-07-13 20:15:46.220032] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:27:58.672 [2024-07-13 20:15:46.220044] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:27:58.672 [2024-07-13 20:15:46.220057] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:27:58.672 [2024-07-13 20:15:46.220077] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:27:58.672 [2024-07-13 20:15:46.220092] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:27:58.672 [2024-07-13 20:15:46.220104] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:27:58.672 [2024-07-13 20:15:46.220123] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:27:58.672 [2024-07-13 20:15:46.220136] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:27:58.672 [2024-07-13 20:15:46.220149] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:27:58.672 [2024-07-13 20:15:46.220166] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:27:58.672 [2024-07-13 20:15:46.220178] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:27:58.672 [2024-07-13 20:15:46.220191] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:27:58.672 [2024-07-13 20:15:46.220243] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:58.672 [2024-07-13 20:15:46.220266] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:58.672 [2024-07-13 20:15:46.220283] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:58.672 [2024-07-13 20:15:46.220301] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:58.673 [2024-07-13 20:15:46.220925] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:58.673 [2024-07-13 20:15:46.220950] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:58.673 [2024-07-13 20:15:46.220963] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:58.673 [2024-07-13 20:15:46.220974] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:58.673 [2024-07-13 20:15:46.221002] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9a720 (9): Bad file descriptor
00:27:58.673 [2024-07-13 20:15:46.221023] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe98f30 (9): Bad file descriptor
00:27:58.673 [2024-07-13 20:15:46.221095] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:58.673 [2024-07-13 20:15:46.221120] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:58.673 [2024-07-13 20:15:46.221136] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:58.673 [2024-07-13 20:15:46.221170] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:27:58.673 [2024-07-13 20:15:46.221186] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:27:58.673 [2024-07-13 20:15:46.221200] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:27:58.673 [2024-07-13 20:15:46.221216] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:27:58.673 [2024-07-13 20:15:46.221229] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:27:58.673 [2024-07-13 20:15:46.221241] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:27:58.673 [2024-07-13 20:15:46.221283] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:58.673 [2024-07-13 20:15:46.221314] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:58.673 [2024-07-13 20:15:46.221331] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:58.673 [2024-07-13 20:15:46.221493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.673 [2024-07-13 20:15:46.221519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe9a310 with addr=10.0.0.2, port=4420
00:27:58.673 [2024-07-13 20:15:46.221535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe9a310 is same with the state(5) to be set
00:27:58.673 [2024-07-13 20:15:46.221680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.673 [2024-07-13 20:15:46.221706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe997d0 with addr=10.0.0.2, port=4420
00:27:58.673 [2024-07-13 20:15:46.221722] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe997d0 is same with the state(5) to be set
00:27:58.673 [2024-07-13 20:15:46.221860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.673 [2024-07-13 20:15:46.221910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c5df0 with addr=10.0.0.2, port=4420
00:27:58.673 [2024-07-13 20:15:46.221926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c5df0 is same with the state(5) to be set
00:27:58.673 [2024-07-13 20:15:46.222101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:58.673 [2024-07-13 20:15:46.222126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce7d10 with addr=10.0.0.2, port=4420
00:27:58.673 [2024-07-13 20:15:46.222142] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce7d10 is same with the state(5) to be set
00:27:58.673 [2024-07-13 20:15:46.222160] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe9a310 (9): Bad file descriptor
00:27:58.673 [2024-07-13 20:15:46.222178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe997d0 (9): Bad file descriptor
00:27:58.673 [2024-07-13 20:15:46.222195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c5df0 (9): Bad file descriptor
00:27:58.673 [2024-07-13 20:15:46.222238] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce7d10 (9): Bad file descriptor
00:27:58.673 [2024-07-13 20:15:46.222259] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:27:58.673 [2024-07-13 20:15:46.222271] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:27:58.673 [2024-07-13 20:15:46.222289] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:27:58.673 [2024-07-13 20:15:46.222306] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:27:58.673 [2024-07-13 20:15:46.222320] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:27:58.673 [2024-07-13 20:15:46.222332] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:27:58.673 [2024-07-13 20:15:46.222347] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:58.673 [2024-07-13 20:15:46.222360] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:58.673 [2024-07-13 20:15:46.222373] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:58.673 [2024-07-13 20:15:46.222409] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:58.673 [2024-07-13 20:15:46.222425] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:58.673 [2024-07-13 20:15:46.222437] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:58.673 [2024-07-13 20:15:46.222449] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:58.673 [2024-07-13 20:15:46.222461] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:58.673 [2024-07-13 20:15:46.222474] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:58.673 [2024-07-13 20:15:46.222509] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:59.240 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:27:59.240 20:15:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3281489
00:28:00.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3281489) - No such process
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:00.172 rmmod nvme_tcp
00:28:00.172 rmmod nvme_fabrics
00:28:00.172 rmmod nvme_keyring
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:00.172 20:15:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:02.702 20:15:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:02.702
00:28:02.702 real 0m7.462s
00:28:02.702 user 0m18.032s
00:28:02.702 sys 0m1.573s
00:28:02.702 20:15:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:28:02.702 20:15:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:02.702 ************************************
00:28:02.702 END TEST nvmf_shutdown_tc3
00:28:02.702 ************************************
00:28:02.702 20:15:49 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:28:02.702
00:28:02.702 real 0m27.091s
00:28:02.702 user 1m15.668s
00:28:02.702 sys 0m6.366s
00:28:02.702 20:15:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable
00:28:02.703 20:15:49 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:28:02.703 ************************************
00:28:02.703 END TEST nvmf_shutdown
00:28:02.703 ************************************
00:28:02.703 20:15:49 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target
00:28:02.703 20:15:49 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:02.703 20:15:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:02.703 20:15:49 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host
00:28:02.703 20:15:49 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable
00:28:02.703 20:15:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:02.703 20:15:49 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]]
00:28:02.703 20:15:49 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:28:02.703 20:15:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:28:02.703 20:15:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:28:02.703 20:15:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:02.703 ************************************
00:28:02.703 START TEST nvmf_multicontroller
00:28:02.703 ************************************
00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:28:02.703 * Looking for test storage...
00:28:02.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:02.703 20:15:49 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:02.703 20:15:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.606 20:15:51 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:04.606 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.606 20:15:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:04.606 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:04.606 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:04.606 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:04.606 20:15:52 
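By this point nvmf_tcp_init has picked cvl_0_0 as the target port and cvl_0_1 as the initiator port, and the trace is mid-way through moving the target NIC into its own network namespace; the exec-side addressing, link-up and iptables commands continue just below. Condensed into one runnable sketch (assuming, as the successful cross-namespace pings below confirm, that the two ports are physically looped on this test bed):

  ip netns add cvl_0_0_ns_spdk                    # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # the NIC vanishes from the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                              # root ns -> target ns: traffic really crosses the wire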
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:04.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:04.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:28:04.606 00:28:04.606 --- 10.0.0.2 ping statistics --- 00:28:04.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.606 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:28:04.606 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:04.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:04.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:28:04.606 00:28:04.606 --- 10.0.0.1 ping statistics --- 00:28:04.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.606 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3283928 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3283928 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3283928 ']' 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:04.607 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.607 [2024-07-13 20:15:52.216372] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:04.607 [2024-07-13 20:15:52.216457] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.607 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.865 [2024-07-13 20:15:52.282094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:04.865 [2024-07-13 20:15:52.369036] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.865 [2024-07-13 20:15:52.369099] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:04.865 [2024-07-13 20:15:52.369128] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:04.865 [2024-07-13 20:15:52.369141] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:04.865 [2024-07-13 20:15:52.369151] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:04.865 [2024-07-13 20:15:52.369282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.865 [2024-07-13 20:15:52.369348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:04.865 [2024-07-13 20:15:52.369350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.865 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:04.865 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:04.865 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:04.865 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:04.865 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.865 20:15:52 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.865 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:04.865 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.865 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.865 [2024-07-13 20:15:52.509920] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.865 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.865 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:04.865 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.865 20:15:52 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.124 Malloc0 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.124 [2024-07-13 20:15:52.569412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.124 [2024-07-13 20:15:52.577302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.124 Malloc1 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
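The rpc_cmd calls in this stretch provision both subsystems over the target's default /var/tmp/spdk.sock (the cnode2 namespace and listeners follow just below). Stripped of the xtrace noise, the cnode1 half of the sequence is the following; rpc_cmd is shown as a plain wrapper here, whereas the real harness helper also multiplexes over a persistent socket:

  rpc_cmd() { scripts/rpc.py "$@"; }                # simplified stand-in for the harness helper
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192   # flags copied from the trace; -u is the I/O unit size
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421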
00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.124 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3284076 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3284076 /var/tmp/bdevperf.sock 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 3284076 ']' 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:05.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
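bdevperf is launched here with -z (start suspended until perform_tests is sent) and its own RPC socket, and the harness then blocks in waitforlisten until that socket answers. A simplified sketch of that readiness poll; the per-attempt RPC timeout is omitted, but the max_retries=100 bound and the liveness check both appear in the trace:

  waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do                       # max_retries=100, as traced
      kill -0 "$pid" 2>/dev/null || return 1              # stop polling if the app already died
      scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0   # socket is up
      sleep 0.5
    done
    return 1
  }
  waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock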
00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:05.125 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.381 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:05.381 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:28:05.381 20:15:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:05.381 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.381 20:15:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.638 NVMe0n1 00:28:05.638 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.638 20:15:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:05.638 20:15:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:05.638 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.638 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.638 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.638 1 00:28:05.638 20:15:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:05.638 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:05.638 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:05.638 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:05.638 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:05.638 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:05.638 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.639 request: 00:28:05.639 { 00:28:05.639 "name": "NVMe0", 00:28:05.639 "trtype": "tcp", 00:28:05.639 "traddr": "10.0.0.2", 00:28:05.639 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:05.639 "hostaddr": "10.0.0.2", 00:28:05.639 "hostsvcid": "60000", 00:28:05.639 "adrfam": "ipv4", 00:28:05.639 "trsvcid": "4420", 00:28:05.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.639 "method": 
"bdev_nvme_attach_controller", 00:28:05.639 "req_id": 1 00:28:05.639 } 00:28:05.639 Got JSON-RPC error response 00:28:05.639 response: 00:28:05.639 { 00:28:05.639 "code": -114, 00:28:05.639 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:05.639 } 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.639 request: 00:28:05.639 { 00:28:05.639 "name": "NVMe0", 00:28:05.639 "trtype": "tcp", 00:28:05.639 "traddr": "10.0.0.2", 00:28:05.639 "hostaddr": "10.0.0.2", 00:28:05.639 "hostsvcid": "60000", 00:28:05.639 "adrfam": "ipv4", 00:28:05.639 "trsvcid": "4420", 00:28:05.639 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:05.639 "method": "bdev_nvme_attach_controller", 00:28:05.639 "req_id": 1 00:28:05.639 } 00:28:05.639 Got JSON-RPC error response 00:28:05.639 response: 00:28:05.639 { 00:28:05.639 "code": -114, 00:28:05.639 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:05.639 } 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.639 request: 00:28:05.639 { 00:28:05.639 "name": "NVMe0", 00:28:05.639 "trtype": "tcp", 00:28:05.639 "traddr": "10.0.0.2", 00:28:05.639 "hostaddr": "10.0.0.2", 00:28:05.639 "hostsvcid": "60000", 00:28:05.639 "adrfam": "ipv4", 00:28:05.639 "trsvcid": "4420", 00:28:05.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.639 "multipath": "disable", 00:28:05.639 "method": "bdev_nvme_attach_controller", 00:28:05.639 "req_id": 1 00:28:05.639 } 00:28:05.639 Got JSON-RPC error response 00:28:05.639 response: 00:28:05.639 { 00:28:05.639 "code": -114, 00:28:05.639 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:05.639 } 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.639 request: 00:28:05.639 { 00:28:05.639 "name": "NVMe0", 00:28:05.639 "trtype": "tcp", 00:28:05.639 "traddr": "10.0.0.2", 00:28:05.639 "hostaddr": "10.0.0.2", 00:28:05.639 "hostsvcid": "60000", 00:28:05.639 "adrfam": "ipv4", 00:28:05.639 "trsvcid": "4420", 00:28:05.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:05.639 "multipath": "failover", 00:28:05.639 "method": "bdev_nvme_attach_controller", 00:28:05.639 "req_id": 1 00:28:05.639 } 00:28:05.639 Got JSON-RPC error response 00:28:05.639 response: 00:28:05.639 { 00:28:05.639 "code": -114, 00:28:05.639 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:05.639 } 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.639 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.639 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.898 00:28:05.898 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.898 20:15:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:05.898 20:15:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:05.898 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.898 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:05.898 20:15:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.898 20:15:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:05.898 20:15:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:07.270 0 00:28:07.270 20:15:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:07.270 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.270 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.270 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.270 20:15:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3284076 00:28:07.270 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3284076 ']' 00:28:07.270 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3284076 00:28:07.270 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3284076 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3284076' 00:28:07.271 killing process with pid 3284076 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3284076 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3284076 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:07.271 20:15:54 
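Every rejected attach above runs through the harness's NOT wrapper, which inverts the wrapped command's exit status so that an expected failure counts as a pass; that is what the es=1 / (( !es == 0 )) lines in the trace are doing. Reduced to its core (the valid_exec_arg and out-string bookkeeping are dropped):

  NOT() {
    local es=0
    "$@" || es=$?                    # run the command; remember a non-zero status
    (( es > 128 )) && return "$es"   # death by signal is propagated, never treated as expected
    (( !es == 0 ))                   # true exactly when the command failed
  }
  NOT false && echo "failure was expected"   # prints: failure was expected
  NOT true  || echo "unexpected success"     # prints: unexpected success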
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:28:07.271 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:07.271 [2024-07-13 20:15:52.679256] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:07.271 [2024-07-13 20:15:52.679339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284076 ] 00:28:07.271 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.271 [2024-07-13 20:15:52.738193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.271 [2024-07-13 20:15:52.824566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.271 [2024-07-13 20:15:53.428832] bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 74bfb12a-e50a-4eb8-bc5d-f7039cb55bc6 already exists 00:28:07.271 [2024-07-13 20:15:53.428897] bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:74bfb12a-e50a-4eb8-bc5d-f7039cb55bc6 alias for bdev NVMe1n1 00:28:07.271 [2024-07-13 20:15:53.428917] bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:07.271 Running I/O for 1 seconds... 
00:28:07.271 00:28:07.271 Latency(us) 00:28:07.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.271 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:07.271 NVMe0n1 : 1.01 17079.58 66.72 0.00 0.00 7462.66 2063.17 9272.13 00:28:07.271 =================================================================================================================== 00:28:07.271 Total : 17079.58 66.72 0.00 0.00 7462.66 2063.17 9272.13 00:28:07.271 Received shutdown signal, test time was about 1.000000 seconds 00:28:07.271 00:28:07.271 Latency(us) 00:28:07.271 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.271 =================================================================================================================== 00:28:07.271 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:07.271 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:07.271 rmmod nvme_tcp 00:28:07.271 rmmod nvme_fabrics 00:28:07.271 rmmod nvme_keyring 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3283928 ']' 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3283928 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 3283928 ']' 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 3283928 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:07.271 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3283928 00:28:07.529 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:07.529 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:07.529 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3283928' 00:28:07.529 killing process with pid 3283928 00:28:07.529 20:15:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 3283928 00:28:07.529 20:15:54 
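killprocess, seen here tearing down bdevperf (pid 3284076) and then nvmf_tgt (pid 3283928), checks what it is about to signal before sending the kill; the wait that reaps the dying PID follows immediately below. A trimmed sketch of that guard logic; the real helper additionally special-cases a process launched via sudo:

  killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                    # still alive at all?
    local name
    name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0 / reactor_1 for SPDK apps
    [[ $name == sudo ]] && return 1               # trimmed: the harness signals sudo's child instead
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                           # reap; a killed job exits non-zero by design
  }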
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 3283928 00:28:07.787 20:15:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:07.787 20:15:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:07.787 20:15:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:07.787 20:15:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:07.787 20:15:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:07.787 20:15:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.787 20:15:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:07.787 20:15:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.688 20:15:57 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:09.688 00:28:09.688 real 0m7.389s 00:28:09.688 user 0m11.404s 00:28:09.688 sys 0m2.322s 00:28:09.688 20:15:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:09.688 20:15:57 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.688 ************************************ 00:28:09.688 END TEST nvmf_multicontroller 00:28:09.688 ************************************ 00:28:09.688 20:15:57 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:09.688 20:15:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:09.688 20:15:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:09.688 20:15:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:09.688 ************************************ 00:28:09.688 START TEST nvmf_aer 00:28:09.688 ************************************ 00:28:09.688 20:15:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:09.946 * Looking for test storage... 
00:28:09.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.946 20:15:57 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:09.947 20:15:57 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.843 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:11.844 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:11.844 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:11.844 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:11.844 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.844 
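The preamble above is nvmf/common.sh classifying the machine's NICs by PCI vendor:device ID (Intel E810 0x1592/0x159b, X722 0x37d2, and a range of Mellanox ConnectX parts), then resolving each matched PCI function to its kernel netdev through sysfs; that is where the two "Found 0000:0a:00.x" lines come from. A minimal sketch of that per-port lookup, with the PCI addresses, the 0x159b device ID, and the sysfs glob taken from this log and the loop itself purely illustrative:

    # Illustrative re-creation of the discovery the harness just logged.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")   # 0x8086 on this box
        device=$(cat "/sys/bus/pci/devices/$pci/device")   # 0x159b => Intel E810
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # same glob as common.sh@383
        echo "Found $pci ($vendor - $device): ${pci_net_devs[*]##*/}"
    done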
20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:11.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:28:11.844 00:28:11.844 --- 10.0.0.2 ping statistics --- 00:28:11.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.844 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:11.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:11.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:28:11.844 00:28:11.844 --- 10.0.0.1 ping statistics --- 00:28:11.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.844 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3286213 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3286213 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 3286213 ']' 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:11.844 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:11.844 [2024-07-13 20:15:59.455505] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:11.844 [2024-07-13 20:15:59.455578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.844 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.102 [2024-07-13 20:15:59.527294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:12.102 [2024-07-13 20:15:59.620063] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.102 [2024-07-13 20:15:59.620125] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
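nvmf_tcp_init above builds the test topology entirely from the two E810 ports: the target port cvl_0_0 is moved into its own network namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, so NVMe/TCP traffic crosses a real link rather than loopback. The two pings confirm both directions before any NVMe traffic flows. Condensed from the commands in the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port enters the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP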
00:28:12.102 [2024-07-13 20:15:59.620140] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.102 [2024-07-13 20:15:59.620151] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.102 [2024-07-13 20:15:59.620176] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.102 [2024-07-13 20:15:59.620240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.102 [2024-07-13 20:15:59.620281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.102 [2024-07-13 20:15:59.620369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.102 [2024-07-13 20:15:59.620372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.102 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:12.102 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:28:12.102 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:12.102 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.102 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:12.360 [2024-07-13 20:15:59.775790] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:12.360 Malloc0 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:12.360 [2024-07-13 20:15:59.829580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:12.360 [ 00:28:12.360 { 00:28:12.360 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:12.360 "subtype": "Discovery", 00:28:12.360 "listen_addresses": [], 00:28:12.360 "allow_any_host": true, 00:28:12.360 "hosts": [] 00:28:12.360 }, 00:28:12.360 { 00:28:12.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:12.360 "subtype": "NVMe", 00:28:12.360 "listen_addresses": [ 00:28:12.360 { 00:28:12.360 "trtype": "TCP", 00:28:12.360 "adrfam": "IPv4", 00:28:12.360 "traddr": "10.0.0.2", 00:28:12.360 "trsvcid": "4420" 00:28:12.360 } 00:28:12.360 ], 00:28:12.360 "allow_any_host": true, 00:28:12.360 "hosts": [], 00:28:12.360 "serial_number": "SPDK00000000000001", 00:28:12.360 "model_number": "SPDK bdev Controller", 00:28:12.360 "max_namespaces": 2, 00:28:12.360 "min_cntlid": 1, 00:28:12.360 "max_cntlid": 65519, 00:28:12.360 "namespaces": [ 00:28:12.360 { 00:28:12.360 "nsid": 1, 00:28:12.360 "bdev_name": "Malloc0", 00:28:12.360 "name": "Malloc0", 00:28:12.360 "nguid": "64B76697BF264C73AB9B948CC6BEA6C7", 00:28:12.360 "uuid": "64b76697-bf26-4c73-ab9b-948cc6bea6c7" 00:28:12.360 } 00:28:12.360 ] 00:28:12.360 } 00:28:12.360 ] 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3286310 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:12.360 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:28:12.360 20:15:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
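With the target up inside the namespace (nvmf_tgt -m 0xF, four reactors), host/aer.sh provisions it over the RPC socket the waitforlisten loop watched (/var/tmp/spdk.sock). The rpc_cmd calls in the log map onto plain scripts/rpc.py invocations; a sketch of the same sequence, assuming the default socket path:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB I/O unit
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0       # 64 MiB RAM bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 2                             # -m 2: room for a second namespace
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

The nvmf_get_subsystems dump that follows shows the result: the discovery subsystem plus cnode1 with max_namespaces 2 and Malloc0 as nsid 1.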
-e /tmp/aer_touch_file ']' 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:12.618 Malloc1 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:12.618 Asynchronous Event Request test 00:28:12.618 Attaching to 10.0.0.2 00:28:12.618 Attached to 10.0.0.2 00:28:12.618 Registering asynchronous event callbacks... 00:28:12.618 Starting namespace attribute notice tests for all controllers... 00:28:12.618 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:12.618 aer_cb - Changed Namespace 00:28:12.618 Cleaning up... 00:28:12.618 [ 00:28:12.618 { 00:28:12.618 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:12.618 "subtype": "Discovery", 00:28:12.618 "listen_addresses": [], 00:28:12.618 "allow_any_host": true, 00:28:12.618 "hosts": [] 00:28:12.618 }, 00:28:12.618 { 00:28:12.618 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:12.618 "subtype": "NVMe", 00:28:12.618 "listen_addresses": [ 00:28:12.618 { 00:28:12.618 "trtype": "TCP", 00:28:12.618 "adrfam": "IPv4", 00:28:12.618 "traddr": "10.0.0.2", 00:28:12.618 "trsvcid": "4420" 00:28:12.618 } 00:28:12.618 ], 00:28:12.618 "allow_any_host": true, 00:28:12.618 "hosts": [], 00:28:12.618 "serial_number": "SPDK00000000000001", 00:28:12.618 "model_number": "SPDK bdev Controller", 00:28:12.618 "max_namespaces": 2, 00:28:12.618 "min_cntlid": 1, 00:28:12.618 "max_cntlid": 65519, 00:28:12.618 "namespaces": [ 00:28:12.618 { 00:28:12.618 "nsid": 1, 00:28:12.618 "bdev_name": "Malloc0", 00:28:12.618 "name": "Malloc0", 00:28:12.618 "nguid": "64B76697BF264C73AB9B948CC6BEA6C7", 00:28:12.618 "uuid": "64b76697-bf26-4c73-ab9b-948cc6bea6c7" 00:28:12.618 }, 00:28:12.618 { 00:28:12.618 "nsid": 2, 00:28:12.618 "bdev_name": "Malloc1", 00:28:12.618 "name": "Malloc1", 00:28:12.618 "nguid": "79D287E79FDE4CBEB4FBC0FF744BEFA6", 00:28:12.618 "uuid": "79d287e7-9fde-4cbe-b4fb-c0ff744befa6" 00:28:12.618 } 00:28:12.618 ] 00:28:12.618 } 00:28:12.618 ] 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3286310 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
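This is the heart of the AER test: test/nvme/aer/aer connects to cnode1, arms Asynchronous Event Requests, and with -n 2 waits for the namespace count to reach two, while the script (via the waitforfile loop above) waits for the touch file the tool writes once the notice arrives. Adding Malloc1 as nsid 2 is what fires the event, and the printed decode matches the NVMe spec fields (aen_event_type 0x02 = Notice, aen_event_info 0x00 = Namespace Attribute Changed, log page 4 = Changed Namespace List). The trigger, reduced to its two RPCs:

    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1      # 4 KiB-block bdev this time
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    # aer_cb reads log page 0x04, sees nsid 2, and touches /tmp/aer_touch_file,
    # releasing the waitforfile loop.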
-- # rpc_cmd bdev_malloc_delete Malloc1 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:12.618 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:12.619 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:12.619 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:12.619 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:12.619 rmmod nvme_tcp 00:28:12.619 rmmod nvme_fabrics 00:28:12.619 rmmod nvme_keyring 00:28:12.619 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:12.619 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:12.619 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:12.619 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3286213 ']' 00:28:12.619 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3286213 00:28:12.619 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 3286213 ']' 00:28:12.619 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 3286213 00:28:12.619 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:28:12.619 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:12.619 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3286213 00:28:12.878 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:12.878 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:12.878 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3286213' 00:28:12.878 killing process with pid 3286213 00:28:12.878 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 3286213 00:28:12.878 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 3286213 00:28:12.878 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:12.878 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:12.878 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:12.878 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.878 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.878 20:16:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.878 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:28:12.878 20:16:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.413 20:16:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:15.413 00:28:15.413 real 0m5.238s 00:28:15.413 user 0m4.130s 00:28:15.413 sys 0m1.861s 00:28:15.413 20:16:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:15.413 20:16:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:15.413 ************************************ 00:28:15.413 END TEST nvmf_aer 00:28:15.413 ************************************ 00:28:15.413 20:16:02 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:15.413 20:16:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:15.413 20:16:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:15.413 20:16:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:15.413 ************************************ 00:28:15.413 START TEST nvmf_async_init 00:28:15.413 ************************************ 00:28:15.413 20:16:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:15.413 * Looking for test storage... 00:28:15.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:15.413 20:16:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.413 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:15.413 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.413 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.414 
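Between the two tests, common.sh is re-sourced for nvmf_async_init and derives the host identity from nvme-cli: nvme gen-hostnqn emits the spec's UUID-based host NQN, and the same UUID is reused as NVME_HOSTID. The NVME_HOST array then carries both onto any nvme connect invocation:

    $ nvme gen-hostnqn
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    # later consumed as: nvme connect ... --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID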
20:16:02 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # 
'[' 0 -eq 1 ']' 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b1c9ecd2809f4eea959cfecd47aeea15 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:15.414 20:16:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.335 20:16:04 
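async_init.sh sizes its null bdev (1024 MiB in 512 B blocks) and pre-computes a namespace globally unique identifier by stripping the hyphens from a fresh UUID; the target will later report that value back both as the 16-byte NGUID and, re-hyphenated, as the namespace UUID in bdev_get_bdevs output:

    nguid=$(uuidgen | tr -d -)      # b1c9ecd2809f4eea959cfecd47aeea15 in this run
    # used below as: nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g $nguid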
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:17.335 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:17.336 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:17.336 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:17.336 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:17.336 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:17.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:28:17.336 00:28:17.336 --- 10.0.0.2 ping statistics --- 00:28:17.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.336 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:17.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:28:17.336 00:28:17.336 --- 10.0.0.1 ping statistics --- 00:28:17.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.336 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3288261 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3288261 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 3288261 ']' 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.336 20:16:04 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:17.336 20:16:04 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.336 [2024-07-13 20:16:04.788476] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:17.336 [2024-07-13 20:16:04.788559] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.336 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.336 [2024-07-13 20:16:04.855176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.336 [2024-07-13 20:16:04.945591] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.336 [2024-07-13 20:16:04.945655] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.336 [2024-07-13 20:16:04.945671] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.336 [2024-07-13 20:16:04.945685] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.336 [2024-07-13 20:16:04.945698] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:17.336 [2024-07-13 20:16:04.945729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.593 [2024-07-13 20:16:05.091314] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.593 null0 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.593 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.594 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b1c9ecd2809f4eea959cfecd47aeea15 00:28:17.594 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.594 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.594 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.594 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:17.594 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.594 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.594 [2024-07-13 20:16:05.131577] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.594 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
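The second target (single core, -m 0x1) is provisioned much like the first, but backed by a null bdev, which completes I/O without storing any data, so the test exercises only the transport and controller plumbing. As direct scripts/rpc.py calls, assuming the default RPC socket:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512       # 1024 MiB => num_blocks 2097152
    scripts/rpc.py bdev_wait_for_examine                 # let bdev examine callbacks settle
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a   # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g $nguid
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420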
== 0 ]] 00:28:17.594 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:17.594 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.594 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.850 nvme0n1 00:28:17.851 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.851 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:17.851 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.851 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.851 [ 00:28:17.851 { 00:28:17.851 "name": "nvme0n1", 00:28:17.851 "aliases": [ 00:28:17.851 "b1c9ecd2-809f-4eea-959c-fecd47aeea15" 00:28:17.851 ], 00:28:17.851 "product_name": "NVMe disk", 00:28:17.851 "block_size": 512, 00:28:17.851 "num_blocks": 2097152, 00:28:17.851 "uuid": "b1c9ecd2-809f-4eea-959c-fecd47aeea15", 00:28:17.851 "assigned_rate_limits": { 00:28:17.851 "rw_ios_per_sec": 0, 00:28:17.851 "rw_mbytes_per_sec": 0, 00:28:17.851 "r_mbytes_per_sec": 0, 00:28:17.851 "w_mbytes_per_sec": 0 00:28:17.851 }, 00:28:17.851 "claimed": false, 00:28:17.851 "zoned": false, 00:28:17.851 "supported_io_types": { 00:28:17.851 "read": true, 00:28:17.851 "write": true, 00:28:17.851 "unmap": false, 00:28:17.851 "write_zeroes": true, 00:28:17.851 "flush": true, 00:28:17.851 "reset": true, 00:28:17.851 "compare": true, 00:28:17.851 "compare_and_write": true, 00:28:17.851 "abort": true, 00:28:17.851 "nvme_admin": true, 00:28:17.851 "nvme_io": true 00:28:17.851 }, 00:28:17.851 "memory_domains": [ 00:28:17.851 { 00:28:17.851 "dma_device_id": "system", 00:28:17.851 "dma_device_type": 1 00:28:17.851 } 00:28:17.851 ], 00:28:17.851 "driver_specific": { 00:28:17.851 "nvme": [ 00:28:17.851 { 00:28:17.851 "trid": { 00:28:17.851 "trtype": "TCP", 00:28:17.851 "adrfam": "IPv4", 00:28:17.851 "traddr": "10.0.0.2", 00:28:17.851 "trsvcid": "4420", 00:28:17.851 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:17.851 }, 00:28:17.851 "ctrlr_data": { 00:28:17.851 "cntlid": 1, 00:28:17.851 "vendor_id": "0x8086", 00:28:17.851 "model_number": "SPDK bdev Controller", 00:28:17.851 "serial_number": "00000000000000000000", 00:28:17.851 "firmware_revision": "24.05.1", 00:28:17.851 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:17.851 "oacs": { 00:28:17.851 "security": 0, 00:28:17.851 "format": 0, 00:28:17.851 "firmware": 0, 00:28:17.851 "ns_manage": 0 00:28:17.851 }, 00:28:17.851 "multi_ctrlr": true, 00:28:17.851 "ana_reporting": false 00:28:17.851 }, 00:28:17.851 "vs": { 00:28:17.851 "nvme_version": "1.3" 00:28:17.851 }, 00:28:17.851 "ns_data": { 00:28:17.851 "id": 1, 00:28:17.851 "can_share": true 00:28:17.851 } 00:28:17.851 } 00:28:17.851 ], 00:28:17.851 "mp_policy": "active_passive" 00:28:17.851 } 00:28:17.851 } 00:28:17.851 ] 00:28:17.851 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.851 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:17.851 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.851 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:17.851 [2024-07-13 20:16:05.380232] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: 
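Here the same SPDK process also plays the initiator: bdev_nvme_attach_controller creates NVMe controller "nvme0" against cnode0, and its first namespace surfaces as bdev nvme0n1 (the name is simply <controller>n<nsid>). The bdev_get_bdevs dump that follows confirms the round trip: num_blocks 2097152, block_size 512, the nguid/uuid pair generated earlier, cntlid 1, NVMe version 1.3:

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_get_bdevs -b nvme0n1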
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:17.851 [2024-07-13 20:16:05.380335] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c0b90 (9): Bad file descriptor 00:28:18.109 [2024-07-13 20:16:05.512026] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.109 [ 00:28:18.109 { 00:28:18.109 "name": "nvme0n1", 00:28:18.109 "aliases": [ 00:28:18.109 "b1c9ecd2-809f-4eea-959c-fecd47aeea15" 00:28:18.109 ], 00:28:18.109 "product_name": "NVMe disk", 00:28:18.109 "block_size": 512, 00:28:18.109 "num_blocks": 2097152, 00:28:18.109 "uuid": "b1c9ecd2-809f-4eea-959c-fecd47aeea15", 00:28:18.109 "assigned_rate_limits": { 00:28:18.109 "rw_ios_per_sec": 0, 00:28:18.109 "rw_mbytes_per_sec": 0, 00:28:18.109 "r_mbytes_per_sec": 0, 00:28:18.109 "w_mbytes_per_sec": 0 00:28:18.109 }, 00:28:18.109 "claimed": false, 00:28:18.109 "zoned": false, 00:28:18.109 "supported_io_types": { 00:28:18.109 "read": true, 00:28:18.109 "write": true, 00:28:18.109 "unmap": false, 00:28:18.109 "write_zeroes": true, 00:28:18.109 "flush": true, 00:28:18.109 "reset": true, 00:28:18.109 "compare": true, 00:28:18.109 "compare_and_write": true, 00:28:18.109 "abort": true, 00:28:18.109 "nvme_admin": true, 00:28:18.109 "nvme_io": true 00:28:18.109 }, 00:28:18.109 "memory_domains": [ 00:28:18.109 { 00:28:18.109 "dma_device_id": "system", 00:28:18.109 "dma_device_type": 1 00:28:18.109 } 00:28:18.109 ], 00:28:18.109 "driver_specific": { 00:28:18.109 "nvme": [ 00:28:18.109 { 00:28:18.109 "trid": { 00:28:18.109 "trtype": "TCP", 00:28:18.109 "adrfam": "IPv4", 00:28:18.109 "traddr": "10.0.0.2", 00:28:18.109 "trsvcid": "4420", 00:28:18.109 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:18.109 }, 00:28:18.109 "ctrlr_data": { 00:28:18.109 "cntlid": 2, 00:28:18.109 "vendor_id": "0x8086", 00:28:18.109 "model_number": "SPDK bdev Controller", 00:28:18.109 "serial_number": "00000000000000000000", 00:28:18.109 "firmware_revision": "24.05.1", 00:28:18.109 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:18.109 "oacs": { 00:28:18.109 "security": 0, 00:28:18.109 "format": 0, 00:28:18.109 "firmware": 0, 00:28:18.109 "ns_manage": 0 00:28:18.109 }, 00:28:18.109 "multi_ctrlr": true, 00:28:18.109 "ana_reporting": false 00:28:18.109 }, 00:28:18.109 "vs": { 00:28:18.109 "nvme_version": "1.3" 00:28:18.109 }, 00:28:18.109 "ns_data": { 00:28:18.109 "id": 1, 00:28:18.109 "can_share": true 00:28:18.109 } 00:28:18.109 } 00:28:18.109 ], 00:28:18.109 "mp_policy": "active_passive" 00:28:18.109 } 00:28:18.109 } 00:28:18.109 ] 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 
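The reset step drops and re-establishes the fabrics association. The ERROR about failing to flush the old tqpair (errno 9, Bad file descriptor) is emitted while the previous connection is torn down and is followed by a successful reconnect; the tell-tale in the second bdev_get_bdevs dump is cntlid climbing from 1 to 2, since each new association receives a fresh controller ID from the target:

    scripts/rpc.py bdev_nvme_reset_controller nvme0   # disconnect + reconnect
    scripts/rpc.py bdev_get_bdevs -b nvme0n1          # "cntlid": 2 after the reset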
-- # mktemp 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.dbxcX6tT2p 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.dbxcX6tT2p 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.109 [2024-07-13 20:16:05.560796] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:18.109 [2024-07-13 20:16:05.560999] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dbxcX6tT2p 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.109 [2024-07-13 20:16:05.568814] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.dbxcX6tT2p 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.109 [2024-07-13 20:16:05.576829] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:18.109 [2024-07-13 20:16:05.576913] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:18.109 nvme0n1 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.109 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.109 [ 00:28:18.109 { 00:28:18.109 "name": "nvme0n1", 00:28:18.109 "aliases": [ 00:28:18.109 "b1c9ecd2-809f-4eea-959c-fecd47aeea15" 00:28:18.109 ], 00:28:18.109 
"product_name": "NVMe disk", 00:28:18.109 "block_size": 512, 00:28:18.109 "num_blocks": 2097152, 00:28:18.109 "uuid": "b1c9ecd2-809f-4eea-959c-fecd47aeea15", 00:28:18.109 "assigned_rate_limits": { 00:28:18.109 "rw_ios_per_sec": 0, 00:28:18.109 "rw_mbytes_per_sec": 0, 00:28:18.109 "r_mbytes_per_sec": 0, 00:28:18.109 "w_mbytes_per_sec": 0 00:28:18.109 }, 00:28:18.109 "claimed": false, 00:28:18.109 "zoned": false, 00:28:18.109 "supported_io_types": { 00:28:18.109 "read": true, 00:28:18.109 "write": true, 00:28:18.109 "unmap": false, 00:28:18.109 "write_zeroes": true, 00:28:18.109 "flush": true, 00:28:18.109 "reset": true, 00:28:18.109 "compare": true, 00:28:18.109 "compare_and_write": true, 00:28:18.109 "abort": true, 00:28:18.110 "nvme_admin": true, 00:28:18.110 "nvme_io": true 00:28:18.110 }, 00:28:18.110 "memory_domains": [ 00:28:18.110 { 00:28:18.110 "dma_device_id": "system", 00:28:18.110 "dma_device_type": 1 00:28:18.110 } 00:28:18.110 ], 00:28:18.110 "driver_specific": { 00:28:18.110 "nvme": [ 00:28:18.110 { 00:28:18.110 "trid": { 00:28:18.110 "trtype": "TCP", 00:28:18.110 "adrfam": "IPv4", 00:28:18.110 "traddr": "10.0.0.2", 00:28:18.110 "trsvcid": "4421", 00:28:18.110 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:18.110 }, 00:28:18.110 "ctrlr_data": { 00:28:18.110 "cntlid": 3, 00:28:18.110 "vendor_id": "0x8086", 00:28:18.110 "model_number": "SPDK bdev Controller", 00:28:18.110 "serial_number": "00000000000000000000", 00:28:18.110 "firmware_revision": "24.05.1", 00:28:18.110 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:18.110 "oacs": { 00:28:18.110 "security": 0, 00:28:18.110 "format": 0, 00:28:18.110 "firmware": 0, 00:28:18.110 "ns_manage": 0 00:28:18.110 }, 00:28:18.110 "multi_ctrlr": true, 00:28:18.110 "ana_reporting": false 00:28:18.110 }, 00:28:18.110 "vs": { 00:28:18.110 "nvme_version": "1.3" 00:28:18.110 }, 00:28:18.110 "ns_data": { 00:28:18.110 "id": 1, 00:28:18.110 "can_share": true 00:28:18.110 } 00:28:18.110 } 00:28:18.110 ], 00:28:18.110 "mp_policy": "active_passive" 00:28:18.110 } 00:28:18.110 } 00:28:18.110 ] 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.dbxcX6tT2p 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:18.110 rmmod nvme_tcp 00:28:18.110 rmmod nvme_fabrics 00:28:18.110 rmmod nvme_keyring 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3288261 ']' 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3288261 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 3288261 ']' 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 3288261 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:18.110 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3288261 00:28:18.367 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:18.367 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:18.367 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3288261' 00:28:18.367 killing process with pid 3288261 00:28:18.367 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 3288261 00:28:18.367 [2024-07-13 20:16:05.767586] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:28:18.367 [2024-07-13 20:16:05.767624] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:18.367 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 3288261 00:28:18.367 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:18.367 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:18.367 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:18.367 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:18.367 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:18.367 20:16:05 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.367 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:18.367 20:16:05 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.914 20:16:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:20.914 00:28:20.914 real 0m5.404s 00:28:20.914 user 0m2.035s 00:28:20.914 sys 0m1.747s 00:28:20.914 20:16:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:20.914 20:16:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:20.914 ************************************ 00:28:20.914 END TEST nvmf_async_init 00:28:20.914 ************************************ 00:28:20.914 20:16:08 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:20.914 20:16:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:20.914 20:16:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:20.914 20:16:08 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:28:20.914 ************************************ 00:28:20.914 START TEST dma 00:28:20.914 ************************************ 00:28:20.914 20:16:08 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:20.914 * Looking for test storage... 00:28:20.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:20.914 20:16:08 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.914 20:16:08 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.914 20:16:08 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.914 20:16:08 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.914 20:16:08 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.914 20:16:08 nvmf_tcp.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.914 20:16:08 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.914 20:16:08 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:20.914 20:16:08 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:20.914 20:16:08 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:20.914 20:16:08 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:20.914 20:16:08 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:20.914 00:28:20.914 real 0m0.062s 00:28:20.914 user 0m0.029s 00:28:20.914 sys 0m0.038s 00:28:20.914 20:16:08 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:20.914 20:16:08 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:20.914 ************************************ 00:28:20.914 END TEST dma 00:28:20.914 ************************************ 00:28:20.914 20:16:08 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:20.914 20:16:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:20.914 20:16:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:20.914 20:16:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:20.914 ************************************ 00:28:20.914 START TEST 
nvmf_identify 00:28:20.914 ************************************ 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:20.914 * Looking for test storage... 00:28:20.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.914 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:20.915 20:16:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.821 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.821 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:22.821 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:22.821 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:22.821 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:22.821 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:22.821 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:22.821 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:22.821 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:22.821 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:22.821 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:22.822 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:22.822 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:22.822 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:22.822 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:28:22.822 00:28:22.822 --- 10.0.0.2 ping statistics --- 00:28:22.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.822 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
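For readers following the trace: the nvmf_tcp_init sequence above builds a point-to-point test topology from the two E810 ports, moving the target interface into its own network namespace so initiator and target traffic really crosses the link between the ports rather than short-circuiting through the host stack. A condensed sketch of that wiring, using the interface names and 10.0.0.0/24 addressing from this run (cvl_0_0 = target side, cvl_0_1 = initiator side):

# Condensed from the nvmf_tcp_init trace above; assumes the same NIC and namespace names.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                  # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The two single-packet pings, whose output follows, are the gate: the test only proceeds once both directions answer.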
00:28:22.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:28:22.822 00:28:22.822 --- 10.0.0.1 ping statistics --- 00:28:22.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.822 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3290378 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3290378 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 3290378 ']' 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:22.822 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:23.082 [2024-07-13 20:16:10.495741] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:23.082 [2024-07-13 20:16:10.495834] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.082 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.082 [2024-07-13 20:16:10.561596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:23.082 [2024-07-13 20:16:10.650721] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
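With the namespace verified, host/identify.sh launches the target application inside it and blocks on its RPC socket before configuring anything. A minimal sketch of the launch pattern traced above (the long workspace path is shortened here; waitforlisten is the autotest_common.sh helper shown in the trace, which waits until the app listens on /var/tmp/spdk.sock):

# Run nvmf_tgt inside the target namespace: shm id 0, all tracepoint groups (-e 0xFFFF),
# reactors on cores 0-3 (-m 0xF).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
waitforlisten "$nvmfpid"   # only start issuing RPCs once the target accepts connections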
00:28:23.082 [2024-07-13 20:16:10.650774] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.082 [2024-07-13 20:16:10.650802] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.082 [2024-07-13 20:16:10.650817] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.082 [2024-07-13 20:16:10.650828] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:23.082 [2024-07-13 20:16:10.650913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.082 [2024-07-13 20:16:10.650980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:23.082 [2024-07-13 20:16:10.651026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:23.082 [2024-07-13 20:16:10.651029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:23.344 [2024-07-13 20:16:10.783585] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:23.344 Malloc0 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:23.344 [2024-07-13 20:16:10.859442] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:23.344 [ 00:28:23.344 { 00:28:23.344 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:23.344 "subtype": "Discovery", 00:28:23.344 "listen_addresses": [ 00:28:23.344 { 00:28:23.344 "trtype": "TCP", 00:28:23.344 "adrfam": "IPv4", 00:28:23.344 "traddr": "10.0.0.2", 00:28:23.344 "trsvcid": "4420" 00:28:23.344 } 00:28:23.344 ], 00:28:23.344 "allow_any_host": true, 00:28:23.344 "hosts": [] 00:28:23.344 }, 00:28:23.344 { 00:28:23.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:23.344 "subtype": "NVMe", 00:28:23.344 "listen_addresses": [ 00:28:23.344 { 00:28:23.344 "trtype": "TCP", 00:28:23.344 "adrfam": "IPv4", 00:28:23.344 "traddr": "10.0.0.2", 00:28:23.344 "trsvcid": "4420" 00:28:23.344 } 00:28:23.344 ], 00:28:23.344 "allow_any_host": true, 00:28:23.344 "hosts": [], 00:28:23.344 "serial_number": "SPDK00000000000001", 00:28:23.344 "model_number": "SPDK bdev Controller", 00:28:23.344 "max_namespaces": 32, 00:28:23.344 "min_cntlid": 1, 00:28:23.344 "max_cntlid": 65519, 00:28:23.344 "namespaces": [ 00:28:23.344 { 00:28:23.344 "nsid": 1, 00:28:23.344 "bdev_name": "Malloc0", 00:28:23.344 "name": "Malloc0", 00:28:23.344 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:23.344 "eui64": "ABCDEF0123456789", 00:28:23.344 "uuid": "93e149ce-2a94-4551-a152-de145e3b6a84" 00:28:23.344 } 00:28:23.344 ] 00:28:23.344 } 00:28:23.344 ] 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.344 20:16:10 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:23.344 [2024-07-13 20:16:10.900273] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
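For reference, the RPC sequence traced above that provisions the target before the identify run, condensed into one place (rpc_cmd is the autotest wrapper around scripts/rpc.py; addresses, NQNs, and sizes exactly as in this run):

# TCP transport with optimized settings (-o) and 8 KiB in-capsule data (-u 8192),
# backed by a 64 MiB, 512-byte-block RAM disk.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
# Expose Malloc0 as namespace 1 of cnode1, open to any host (-a), then listen on
# both the subsystem and the discovery service at the target address.
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Query the discovery service from the initiator side; its startup banner follows below.
./build/bin/spdk_nvme_identify \
    -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all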
00:28:23.344 [2024-07-13 20:16:10.900318] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290528 ] 00:28:23.344 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.344 [2024-07-13 20:16:10.935222] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:23.344 [2024-07-13 20:16:10.935278] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:23.344 [2024-07-13 20:16:10.935288] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:23.345 [2024-07-13 20:16:10.935311] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:23.345 [2024-07-13 20:16:10.935325] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:23.345 [2024-07-13 20:16:10.938929] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:23.345 [2024-07-13 20:16:10.939001] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf3c120 0 00:28:23.345 [2024-07-13 20:16:10.946901] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:23.345 [2024-07-13 20:16:10.946920] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:23.345 [2024-07-13 20:16:10.946929] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:23.345 [2024-07-13 20:16:10.946935] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:23.345 [2024-07-13 20:16:10.946999] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.947012] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.947019] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf3c120) 00:28:23.345 [2024-07-13 20:16:10.947038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:23.345 [2024-07-13 20:16:10.947065] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf951f0, cid 0, qid 0 00:28:23.345 [2024-07-13 20:16:10.954883] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.345 [2024-07-13 20:16:10.954900] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.345 [2024-07-13 20:16:10.954908] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.954916] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf951f0) on tqpair=0xf3c120 00:28:23.345 [2024-07-13 20:16:10.954931] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:23.345 [2024-07-13 20:16:10.954956] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:23.345 [2024-07-13 20:16:10.954966] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:23.345 [2024-07-13 20:16:10.954990] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.955003] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.955011] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf3c120) 00:28:23.345 [2024-07-13 20:16:10.955022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.345 [2024-07-13 20:16:10.955046] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf951f0, cid 0, qid 0 00:28:23.345 [2024-07-13 20:16:10.955200] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.345 [2024-07-13 20:16:10.955215] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.345 [2024-07-13 20:16:10.955222] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.955229] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf951f0) on tqpair=0xf3c120 00:28:23.345 [2024-07-13 20:16:10.955243] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:23.345 [2024-07-13 20:16:10.955257] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:23.345 [2024-07-13 20:16:10.955270] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.955277] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.955284] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf3c120) 00:28:23.345 [2024-07-13 20:16:10.955295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.345 [2024-07-13 20:16:10.955316] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf951f0, cid 0, qid 0 00:28:23.345 [2024-07-13 20:16:10.955446] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.345 [2024-07-13 20:16:10.955458] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.345 [2024-07-13 20:16:10.955464] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.955471] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf951f0) on tqpair=0xf3c120 00:28:23.345 [2024-07-13 20:16:10.955480] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:23.345 [2024-07-13 20:16:10.955494] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:23.345 [2024-07-13 20:16:10.955506] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.955513] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.955520] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf3c120) 00:28:23.345 [2024-07-13 20:16:10.955530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.345 [2024-07-13 20:16:10.955551] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf951f0, cid 0, qid 0 00:28:23.345 [2024-07-13 20:16:10.955736] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.345 [2024-07-13 20:16:10.955751] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.345 [2024-07-13 20:16:10.955758] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.955764] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf951f0) on tqpair=0xf3c120 00:28:23.345 [2024-07-13 20:16:10.955773] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:23.345 [2024-07-13 20:16:10.955790] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.955799] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.955806] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf3c120) 00:28:23.345 [2024-07-13 20:16:10.955836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.345 [2024-07-13 20:16:10.955858] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf951f0, cid 0, qid 0 00:28:23.345 [2024-07-13 20:16:10.956049] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.345 [2024-07-13 20:16:10.956065] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.345 [2024-07-13 20:16:10.956072] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.956078] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf951f0) on tqpair=0xf3c120 00:28:23.345 [2024-07-13 20:16:10.956087] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:23.345 [2024-07-13 20:16:10.956095] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:23.345 [2024-07-13 20:16:10.956108] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:23.345 [2024-07-13 20:16:10.956218] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:23.345 [2024-07-13 20:16:10.956227] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:23.345 [2024-07-13 20:16:10.956241] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.956249] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.956255] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf3c120) 00:28:23.345 [2024-07-13 20:16:10.956265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.345 [2024-07-13 20:16:10.956286] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf951f0, cid 0, qid 0 00:28:23.345 [2024-07-13 20:16:10.956443] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.345 [2024-07-13 20:16:10.956459] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.345 [2024-07-13 20:16:10.956465] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.345 
[2024-07-13 20:16:10.956472] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf951f0) on tqpair=0xf3c120 00:28:23.345 [2024-07-13 20:16:10.956481] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:23.345 [2024-07-13 20:16:10.956498] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.956506] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.956513] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf3c120) 00:28:23.345 [2024-07-13 20:16:10.956523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.345 [2024-07-13 20:16:10.956544] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf951f0, cid 0, qid 0 00:28:23.345 [2024-07-13 20:16:10.956686] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.345 [2024-07-13 20:16:10.956701] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.345 [2024-07-13 20:16:10.956708] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.956715] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf951f0) on tqpair=0xf3c120 00:28:23.345 [2024-07-13 20:16:10.956723] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:23.345 [2024-07-13 20:16:10.956731] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:23.345 [2024-07-13 20:16:10.956749] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:23.345 [2024-07-13 20:16:10.956763] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:23.345 [2024-07-13 20:16:10.956781] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.956789] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf3c120) 00:28:23.345 [2024-07-13 20:16:10.956800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.345 [2024-07-13 20:16:10.956836] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf951f0, cid 0, qid 0 00:28:23.345 [2024-07-13 20:16:10.957039] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.345 [2024-07-13 20:16:10.957056] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.345 [2024-07-13 20:16:10.957062] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.957069] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf3c120): datao=0, datal=4096, cccid=0 00:28:23.345 [2024-07-13 20:16:10.957077] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf951f0) on tqpair(0xf3c120): expected_datao=0, payload_size=4096 00:28:23.345 [2024-07-13 20:16:10.957085] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.345 
[2024-07-13 20:16:10.957096] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.957105] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.957181] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.345 [2024-07-13 20:16:10.957192] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.345 [2024-07-13 20:16:10.957199] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.345 [2024-07-13 20:16:10.957205] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf951f0) on tqpair=0xf3c120 00:28:23.345 [2024-07-13 20:16:10.957222] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:23.346 [2024-07-13 20:16:10.957232] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:23.346 [2024-07-13 20:16:10.957240] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:23.346 [2024-07-13 20:16:10.957249] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:23.346 [2024-07-13 20:16:10.957257] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:23.346 [2024-07-13 20:16:10.957265] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:23.346 [2024-07-13 20:16:10.957279] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:23.346 [2024-07-13 20:16:10.957291] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.957299] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.957321] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf3c120) 00:28:23.346 [2024-07-13 20:16:10.957332] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:23.346 [2024-07-13 20:16:10.957353] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf951f0, cid 0, qid 0 00:28:23.346 [2024-07-13 20:16:10.957536] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.346 [2024-07-13 20:16:10.957551] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.346 [2024-07-13 20:16:10.957558] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.957569] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf951f0) on tqpair=0xf3c120 00:28:23.346 [2024-07-13 20:16:10.957583] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.957590] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.957597] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf3c120) 00:28:23.346 [2024-07-13 20:16:10.957607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.346 [2024-07-13 20:16:10.957617] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.957624] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.957631] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf3c120) 00:28:23.346 [2024-07-13 20:16:10.957640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.346 [2024-07-13 20:16:10.957649] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.957656] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.957662] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf3c120) 00:28:23.346 [2024-07-13 20:16:10.957687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.346 [2024-07-13 20:16:10.957696] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.957703] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.957709] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf3c120) 00:28:23.346 [2024-07-13 20:16:10.957717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.346 [2024-07-13 20:16:10.957726] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:23.346 [2024-07-13 20:16:10.957744] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:23.346 [2024-07-13 20:16:10.957757] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.957764] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf3c120) 00:28:23.346 [2024-07-13 20:16:10.957774] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-07-13 20:16:10.957795] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf951f0, cid 0, qid 0 00:28:23.346 [2024-07-13 20:16:10.957821] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95350, cid 1, qid 0 00:28:23.346 [2024-07-13 20:16:10.957830] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf954b0, cid 2, qid 0 00:28:23.346 [2024-07-13 20:16:10.957837] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95610, cid 3, qid 0 00:28:23.346 [2024-07-13 20:16:10.957845] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95770, cid 4, qid 0 00:28:23.346 [2024-07-13 20:16:10.958023] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.346 [2024-07-13 20:16:10.958037] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.346 [2024-07-13 20:16:10.958044] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.958051] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95770) on tqpair=0xf3c120 00:28:23.346 [2024-07-13 20:16:10.958060] 
nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:23.346 [2024-07-13 20:16:10.958069] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:23.346 [2024-07-13 20:16:10.958090] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.958100] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf3c120) 00:28:23.346 [2024-07-13 20:16:10.958111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-07-13 20:16:10.958131] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95770, cid 4, qid 0 00:28:23.346 [2024-07-13 20:16:10.958291] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.346 [2024-07-13 20:16:10.958306] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.346 [2024-07-13 20:16:10.958313] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.958319] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf3c120): datao=0, datal=4096, cccid=4 00:28:23.346 [2024-07-13 20:16:10.958327] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf95770) on tqpair(0xf3c120): expected_datao=0, payload_size=4096 00:28:23.346 [2024-07-13 20:16:10.958334] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.958368] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.958377] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.958505] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.346 [2024-07-13 20:16:10.958520] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.346 [2024-07-13 20:16:10.958527] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.958533] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95770) on tqpair=0xf3c120 00:28:23.346 [2024-07-13 20:16:10.958551] nvme_ctrlr.c:4038:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:23.346 [2024-07-13 20:16:10.958586] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.958597] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf3c120) 00:28:23.346 [2024-07-13 20:16:10.958608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.346 [2024-07-13 20:16:10.958619] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.958627] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.958633] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf3c120) 00:28:23.346 [2024-07-13 20:16:10.958642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.346 [2024-07-13 20:16:10.958683] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xf95770, cid 4, qid 0 00:28:23.346 [2024-07-13 20:16:10.958694] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf958d0, cid 5, qid 0 00:28:23.346 [2024-07-13 20:16:10.962877] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.346 [2024-07-13 20:16:10.962894] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.346 [2024-07-13 20:16:10.962901] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.962907] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf3c120): datao=0, datal=1024, cccid=4 00:28:23.346 [2024-07-13 20:16:10.962915] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf95770) on tqpair(0xf3c120): expected_datao=0, payload_size=1024 00:28:23.346 [2024-07-13 20:16:10.962922] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.962931] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.962939] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.962947] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.346 [2024-07-13 20:16:10.962960] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.346 [2024-07-13 20:16:10.962967] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.346 [2024-07-13 20:16:10.962974] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf958d0) on tqpair=0xf3c120 00:28:23.610 [2024-07-13 20:16:11.002901] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.610 [2024-07-13 20:16:11.002922] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.610 [2024-07-13 20:16:11.002931] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.610 [2024-07-13 20:16:11.002938] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95770) on tqpair=0xf3c120 00:28:23.610 [2024-07-13 20:16:11.002962] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.610 [2024-07-13 20:16:11.002972] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf3c120) 00:28:23.610 [2024-07-13 20:16:11.002984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.610 [2024-07-13 20:16:11.003016] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95770, cid 4, qid 0 00:28:23.610 [2024-07-13 20:16:11.003175] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.610 [2024-07-13 20:16:11.003191] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.610 [2024-07-13 20:16:11.003198] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.610 [2024-07-13 20:16:11.003205] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf3c120): datao=0, datal=3072, cccid=4 00:28:23.610 [2024-07-13 20:16:11.003213] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf95770) on tqpair(0xf3c120): expected_datao=0, payload_size=3072 00:28:23.610 [2024-07-13 20:16:11.003220] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.610 [2024-07-13 20:16:11.003231] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.610 [2024-07-13 20:16:11.003238] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.610 [2024-07-13 20:16:11.003313] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.610 [2024-07-13 20:16:11.003325] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.610 [2024-07-13 20:16:11.003332] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.610 [2024-07-13 20:16:11.003339] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95770) on tqpair=0xf3c120 00:28:23.610 [2024-07-13 20:16:11.003353] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.610 [2024-07-13 20:16:11.003362] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf3c120) 00:28:23.610 [2024-07-13 20:16:11.003373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.610 [2024-07-13 20:16:11.003400] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95770, cid 4, qid 0 00:28:23.610 [2024-07-13 20:16:11.003554] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.610 [2024-07-13 20:16:11.003566] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.610 [2024-07-13 20:16:11.003573] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.610 [2024-07-13 20:16:11.003579] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf3c120): datao=0, datal=8, cccid=4 00:28:23.610 [2024-07-13 20:16:11.003587] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf95770) on tqpair(0xf3c120): expected_datao=0, payload_size=8 00:28:23.610 [2024-07-13 20:16:11.003595] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.610 [2024-07-13 20:16:11.003604] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.610 [2024-07-13 20:16:11.003612] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.610 [2024-07-13 20:16:11.044008] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.610 [2024-07-13 20:16:11.044028] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.610 [2024-07-13 20:16:11.044044] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.610 [2024-07-13 20:16:11.044053] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95770) on tqpair=0xf3c120 00:28:23.610 ===================================================== 00:28:23.610 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:23.610 ===================================================== 00:28:23.610 Controller Capabilities/Features 00:28:23.610 ================================ 00:28:23.610 Vendor ID: 0000 00:28:23.610 Subsystem Vendor ID: 0000 00:28:23.610 Serial Number: .................... 00:28:23.610 Model Number: ........................................ 
00:28:23.610 Firmware Version: 24.05.1 00:28:23.610 Recommended Arb Burst: 0 00:28:23.610 IEEE OUI Identifier: 00 00 00 00:28:23.610 Multi-path I/O 00:28:23.610 May have multiple subsystem ports: No 00:28:23.610 May have multiple controllers: No 00:28:23.610 Associated with SR-IOV VF: No 00:28:23.610 Max Data Transfer Size: 131072 00:28:23.610 Max Number of Namespaces: 0 00:28:23.610 Max Number of I/O Queues: 1024 00:28:23.610 NVMe Specification Version (VS): 1.3 00:28:23.610 NVMe Specification Version (Identify): 1.3 00:28:23.610 Maximum Queue Entries: 128 00:28:23.610 Contiguous Queues Required: Yes 00:28:23.610 Arbitration Mechanisms Supported 00:28:23.610 Weighted Round Robin: Not Supported 00:28:23.610 Vendor Specific: Not Supported 00:28:23.610 Reset Timeout: 15000 ms 00:28:23.610 Doorbell Stride: 4 bytes 00:28:23.610 NVM Subsystem Reset: Not Supported 00:28:23.610 Command Sets Supported 00:28:23.610 NVM Command Set: Supported 00:28:23.610 Boot Partition: Not Supported 00:28:23.610 Memory Page Size Minimum: 4096 bytes 00:28:23.610 Memory Page Size Maximum: 4096 bytes 00:28:23.610 Persistent Memory Region: Not Supported 00:28:23.610 Optional Asynchronous Events Supported 00:28:23.610 Namespace Attribute Notices: Not Supported 00:28:23.610 Firmware Activation Notices: Not Supported 00:28:23.610 ANA Change Notices: Not Supported 00:28:23.610 PLE Aggregate Log Change Notices: Not Supported 00:28:23.610 LBA Status Info Alert Notices: Not Supported 00:28:23.610 EGE Aggregate Log Change Notices: Not Supported 00:28:23.610 Normal NVM Subsystem Shutdown event: Not Supported 00:28:23.610 Zone Descriptor Change Notices: Not Supported 00:28:23.610 Discovery Log Change Notices: Supported 00:28:23.610 Controller Attributes 00:28:23.610 128-bit Host Identifier: Not Supported 00:28:23.610 Non-Operational Permissive Mode: Not Supported 00:28:23.610 NVM Sets: Not Supported 00:28:23.610 Read Recovery Levels: Not Supported 00:28:23.610 Endurance Groups: Not Supported 00:28:23.610 Predictable Latency Mode: Not Supported 00:28:23.610 Traffic Based Keep Alive: Not Supported 00:28:23.610 Namespace Granularity: Not Supported 00:28:23.610 SQ Associations: Not Supported 00:28:23.610 UUID List: Not Supported 00:28:23.610 Multi-Domain Subsystem: Not Supported 00:28:23.610 Fixed Capacity Management: Not Supported 00:28:23.610 Variable Capacity Management: Not Supported 00:28:23.610 Delete Endurance Group: Not Supported 00:28:23.610 Delete NVM Set: Not Supported 00:28:23.610 Extended LBA Formats Supported: Not Supported 00:28:23.610 Flexible Data Placement Supported: Not Supported 00:28:23.610 00:28:23.610 Controller Memory Buffer Support 00:28:23.610 ================================ 00:28:23.610 Supported: No 00:28:23.610 00:28:23.610 Persistent Memory Region Support 00:28:23.610 ================================ 00:28:23.611 Supported: No 00:28:23.611 00:28:23.611 Admin Command Set Attributes 00:28:23.611 ============================ 00:28:23.611 Security Send/Receive: Not Supported 00:28:23.611 Format NVM: Not Supported 00:28:23.611 Firmware Activate/Download: Not Supported 00:28:23.611 Namespace Management: Not Supported 00:28:23.611 Device Self-Test: Not Supported 00:28:23.611 Directives: Not Supported 00:28:23.611 NVMe-MI: Not Supported 00:28:23.611 Virtualization Management: Not Supported 00:28:23.611 Doorbell Buffer Config: Not Supported 00:28:23.611 Get LBA Status Capability: Not Supported 00:28:23.611 Command & Feature Lockdown Capability: Not Supported 00:28:23.611 Abort Command Limit: 1 00:28:23.611
Async Event Request Limit: 4 00:28:23.611 Number of Firmware Slots: N/A 00:28:23.611 Firmware Slot 1 Read-Only: N/A 00:28:23.611 Firmware Activation Without Reset: N/A 00:28:23.611 Multiple Update Detection Support: N/A 00:28:23.611 Firmware Update Granularity: No Information Provided 00:28:23.611 Per-Namespace SMART Log: No 00:28:23.611 Asymmetric Namespace Access Log Page: Not Supported 00:28:23.611 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:23.611 Command Effects Log Page: Not Supported 00:28:23.611 Get Log Page Extended Data: Supported 00:28:23.611 Telemetry Log Pages: Not Supported 00:28:23.611 Persistent Event Log Pages: Not Supported 00:28:23.611 Supported Log Pages Log Page: May Support 00:28:23.611 Commands Supported & Effects Log Page: Not Supported 00:28:23.611 Feature Identifiers & Effects Log Page: May Support 00:28:23.611 NVMe-MI Commands & Effects Log Page: May Support 00:28:23.611 Data Area 4 for Telemetry Log: Not Supported 00:28:23.611 Error Log Page Entries Supported: 128 00:28:23.611 Keep Alive: Not Supported 00:28:23.611 00:28:23.611 NVM Command Set Attributes 00:28:23.611 ========================== 00:28:23.611 Submission Queue Entry Size 00:28:23.611 Max: 1 00:28:23.611 Min: 1 00:28:23.611 Completion Queue Entry Size 00:28:23.611 Max: 1 00:28:23.611 Min: 1 00:28:23.611 Number of Namespaces: 0 00:28:23.611 Compare Command: Not Supported 00:28:23.611 Write Uncorrectable Command: Not Supported 00:28:23.611 Dataset Management Command: Not Supported 00:28:23.611 Write Zeroes Command: Not Supported 00:28:23.611 Set Features Save Field: Not Supported 00:28:23.611 Reservations: Not Supported 00:28:23.611 Timestamp: Not Supported 00:28:23.611 Copy: Not Supported 00:28:23.611 Volatile Write Cache: Not Present 00:28:23.611 Atomic Write Unit (Normal): 1 00:28:23.611 Atomic Write Unit (PFail): 1 00:28:23.611 Atomic Compare & Write Unit: 1 00:28:23.611 Fused Compare & Write: Supported 00:28:23.611 Scatter-Gather List 00:28:23.611 SGL Command Set: Supported 00:28:23.611 SGL Keyed: Supported 00:28:23.611 SGL Bit Bucket Descriptor: Not Supported 00:28:23.611 SGL Metadata Pointer: Not Supported 00:28:23.611 Oversized SGL: Not Supported 00:28:23.611 SGL Metadata Address: Not Supported 00:28:23.611 SGL Offset: Supported 00:28:23.611 Transport SGL Data Block: Not Supported 00:28:23.611 Replay Protected Memory Block: Not Supported 00:28:23.611 00:28:23.611 Firmware Slot Information 00:28:23.611 ========================= 00:28:23.611 Active slot: 0 00:28:23.611 00:28:23.611 00:28:23.611 Error Log 00:28:23.611 ========= 00:28:23.611 00:28:23.611 Active Namespaces 00:28:23.611 ================= 00:28:23.611 Discovery Log Page 00:28:23.611 ================== 00:28:23.611 Generation Counter: 2 00:28:23.611 Number of Records: 2 00:28:23.611 Record Format: 0 00:28:23.611 00:28:23.611 Discovery Log Entry 0 00:28:23.611 ---------------------- 00:28:23.611 Transport Type: 3 (TCP) 00:28:23.611 Address Family: 1 (IPv4) 00:28:23.611 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:23.611 Entry Flags: 00:28:23.611 Duplicate Returned Information: 1 00:28:23.611 Explicit Persistent Connection Support for Discovery: 1 00:28:23.611 Transport Requirements: 00:28:23.611 Secure Channel: Not Required 00:28:23.611 Port ID: 0 (0x0000) 00:28:23.611 Controller ID: 65535 (0xffff) 00:28:23.611 Admin Max SQ Size: 128 00:28:23.611 Transport Service Identifier: 4420 00:28:23.611 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:23.611 Transport Address: 10.0.0.2 00:28:23.611
Discovery Log Entry 1 00:28:23.611 ---------------------- 00:28:23.611 Transport Type: 3 (TCP) 00:28:23.611 Address Family: 1 (IPv4) 00:28:23.611 Subsystem Type: 2 (NVM Subsystem) 00:28:23.611 Entry Flags: 00:28:23.611 Duplicate Returned Information: 0 00:28:23.611 Explicit Persistent Connection Support for Discovery: 0 00:28:23.611 Transport Requirements: 00:28:23.611 Secure Channel: Not Required 00:28:23.611 Port ID: 0 (0x0000) 00:28:23.611 Controller ID: 65535 (0xffff) 00:28:23.611 Admin Max SQ Size: 128 00:28:23.611 Transport Service Identifier: 4420 00:28:23.611 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:23.611 Transport Address: 10.0.0.2 [2024-07-13 20:16:11.044165] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:23.611 [2024-07-13 20:16:11.044190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.611 [2024-07-13 20:16:11.044203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.611 [2024-07-13 20:16:11.044213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.611 [2024-07-13 20:16:11.044223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.611 [2024-07-13 20:16:11.044241] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.611 [2024-07-13 20:16:11.044250] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.611 [2024-07-13 20:16:11.044256] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf3c120) 00:28:23.611 [2024-07-13 20:16:11.044268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.611 [2024-07-13 20:16:11.044308] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95610, cid 3, qid 0 00:28:23.611 [2024-07-13 20:16:11.044452] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.611 [2024-07-13 20:16:11.044468] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.611 [2024-07-13 20:16:11.044475] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.611 [2024-07-13 20:16:11.044482] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95610) on tqpair=0xf3c120 00:28:23.611 [2024-07-13 20:16:11.044494] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.611 [2024-07-13 20:16:11.044502] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.611 [2024-07-13 20:16:11.044508] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf3c120) 00:28:23.611 [2024-07-13 20:16:11.044519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.611 [2024-07-13 20:16:11.044546] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95610, cid 3, qid 0 00:28:23.611 [2024-07-13 20:16:11.044706] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.611 [2024-07-13 20:16:11.044721] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.611 [2024-07-13 20:16:11.044728] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.611 [2024-07-13 20:16:11.044735] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95610) on tqpair=0xf3c120 00:28:23.611 [2024-07-13 20:16:11.044743] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:23.611 [2024-07-13 20:16:11.044752] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:23.611 [2024-07-13 20:16:11.044768] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.611 [2024-07-13 20:16:11.044777] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.611 [2024-07-13 20:16:11.044784] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf3c120) 00:28:23.611 [2024-07-13 20:16:11.044794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.611 [2024-07-13 20:16:11.044829] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95610, cid 3, qid 0 00:28:23.611 [2024-07-13 20:16:11.045014] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.611 [2024-07-13 20:16:11.045030] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.611 [2024-07-13 20:16:11.045041] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.611 [2024-07-13 20:16:11.045049] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95610) on tqpair=0xf3c120 00:28:23.611 [2024-07-13 20:16:11.045067] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.611 [2024-07-13 20:16:11.045076] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.611 [2024-07-13 20:16:11.045083] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf3c120) 00:28:23.611 [2024-07-13 20:16:11.045094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.611 [2024-07-13 20:16:11.045115] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95610, cid 3, qid 0 00:28:23.611 [2024-07-13 20:16:11.045298] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.611 [2024-07-13 20:16:11.045313] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.611 [2024-07-13 20:16:11.045320] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.611 [2024-07-13 20:16:11.045327] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95610) on tqpair=0xf3c120 00:28:23.611 [2024-07-13 20:16:11.045343] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.611 [2024-07-13 20:16:11.045353] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.611 [2024-07-13 20:16:11.045359] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf3c120) 00:28:23.611 [2024-07-13 20:16:11.045370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.611 [2024-07-13 20:16:11.045404] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95610, cid 3, qid 0 00:28:23.611 [2024-07-13 20:16:11.045552] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.612 [2024-07-13 
20:16:11.045568] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.612 [2024-07-13 20:16:11.045575] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.045582] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95610) on tqpair=0xf3c120 00:28:23.612 [2024-07-13 20:16:11.045599] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.045608] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.045615] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf3c120) 00:28:23.612 [2024-07-13 20:16:11.045625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.612 [2024-07-13 20:16:11.045646] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95610, cid 3, qid 0 00:28:23.612 [2024-07-13 20:16:11.045789] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.612 [2024-07-13 20:16:11.045801] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.612 [2024-07-13 20:16:11.045808] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.045815] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95610) on tqpair=0xf3c120 00:28:23.612 [2024-07-13 20:16:11.045831] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.045840] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.045847] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf3c120) 00:28:23.612 [2024-07-13 20:16:11.045857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.612 [2024-07-13 20:16:11.045884] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95610, cid 3, qid 0 00:28:23.612 [2024-07-13 20:16:11.046069] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.612 [2024-07-13 20:16:11.046081] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.612 [2024-07-13 20:16:11.046088] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.046095] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95610) on tqpair=0xf3c120 00:28:23.612 [2024-07-13 20:16:11.046115] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.046125] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.046131] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf3c120) 00:28:23.612 [2024-07-13 20:16:11.046142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.612 [2024-07-13 20:16:11.046177] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95610, cid 3, qid 0 00:28:23.612 [2024-07-13 20:16:11.046375] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.612 [2024-07-13 20:16:11.046391] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.612 [2024-07-13 20:16:11.046398] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.612 
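For reference while reading the shutdown polling here: the GET LOG PAGE (02) commands earlier in the trace target log page 0x70, the discovery log. cdw10 carries the log ID in its low byte and a zero-based dword count above it, so 00ff0070, 02ff0070 and 00010070 line up exactly with the 1024-, 3072- and 8-byte c2h payloads logged; it appears the host reads the page header first, then the full page, then re-checks the generation counter with the short read. A sketch of the header read through SPDK's public API, assuming a connected discovery controller as in the earlier sketch (struct size per spdk/nvmf_spec.h, where the header occupies the first 1024 bytes):

#include <stdbool.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static void get_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
    (void)cpl;
    *(bool *)arg = true;
}

/* Counterpart of the first cdw10:00ff0070 read above: fetch the
 * 1024-byte discovery log header (genctr, numrec, recfmt). */
static int read_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
                                 struct spdk_nvmf_discovery_log_page *hdr)
{
    bool done = false;
    int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                              SPDK_NVME_GLOBAL_NS_TAG, hdr,
                                              sizeof(*hdr), 0 /* offset */,
                                              get_log_done, &done);
    if (rc != 0) {
        return rc;
    }
    while (!done) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
    return 0;
}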
[2024-07-13 20:16:11.046405] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95610) on tqpair=0xf3c120 00:28:23.612 [2024-07-13 20:16:11.046421] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.046431] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.046438] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf3c120) 00:28:23.612 [2024-07-13 20:16:11.046448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.612 [2024-07-13 20:16:11.046469] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95610, cid 3, qid 0 00:28:23.612 [2024-07-13 20:16:11.046597] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.612 [2024-07-13 20:16:11.046612] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.612 [2024-07-13 20:16:11.046619] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.046626] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95610) on tqpair=0xf3c120 00:28:23.612 [2024-07-13 20:16:11.046642] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.046652] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.046658] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf3c120) 00:28:23.612 [2024-07-13 20:16:11.046669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.612 [2024-07-13 20:16:11.046689] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95610, cid 3, qid 0 00:28:23.612 [2024-07-13 20:16:11.046823] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.612 [2024-07-13 20:16:11.046835] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.612 [2024-07-13 20:16:11.046842] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.046849] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95610) on tqpair=0xf3c120 00:28:23.612 [2024-07-13 20:16:11.050886] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.050900] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.050908] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf3c120) 00:28:23.612 [2024-07-13 20:16:11.050919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.612 [2024-07-13 20:16:11.050942] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf95610, cid 3, qid 0 00:28:23.612 [2024-07-13 20:16:11.051143] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.612 [2024-07-13 20:16:11.051159] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.612 [2024-07-13 20:16:11.051166] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.051173] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xf95610) on tqpair=0xf3c120 00:28:23.612 [2024-07-13 20:16:11.051190] 
nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:28:23.612 00:28:23.612 20:16:11 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:23.612 [2024-07-13 20:16:11.084518] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:23.612 [2024-07-13 20:16:11.084562] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290530 ] 00:28:23.612 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.612 [2024-07-13 20:16:11.116692] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:23.612 [2024-07-13 20:16:11.116743] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:23.612 [2024-07-13 20:16:11.116752] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:23.612 [2024-07-13 20:16:11.116766] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:23.612 [2024-07-13 20:16:11.116778] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:23.612 [2024-07-13 20:16:11.119900] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:23.612 [2024-07-13 20:16:11.119939] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1383120 0 00:28:23.612 [2024-07-13 20:16:11.126878] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:23.612 [2024-07-13 20:16:11.126898] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:23.612 [2024-07-13 20:16:11.126906] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:23.612 [2024-07-13 20:16:11.126912] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:23.612 [2024-07-13 20:16:11.126962] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.126975] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.126982] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1383120) 00:28:23.612 [2024-07-13 20:16:11.126997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:23.612 [2024-07-13 20:16:11.127023] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc1f0, cid 0, qid 0 00:28:23.612 [2024-07-13 20:16:11.134881] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.612 [2024-07-13 20:16:11.134898] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.612 [2024-07-13 20:16:11.134906] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.134913] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc1f0) on tqpair=0x1383120 00:28:23.612 [2024-07-13 20:16:11.134932] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:23.612 [2024-07-13 20:16:11.134958] 
nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:23.612 [2024-07-13 20:16:11.134968] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:23.612 [2024-07-13 20:16:11.134988] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.134997] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.135004] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1383120) 00:28:23.612 [2024-07-13 20:16:11.135015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.612 [2024-07-13 20:16:11.135043] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc1f0, cid 0, qid 0 00:28:23.612 [2024-07-13 20:16:11.135216] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.612 [2024-07-13 20:16:11.135231] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.612 [2024-07-13 20:16:11.135238] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.135245] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc1f0) on tqpair=0x1383120 00:28:23.612 [2024-07-13 20:16:11.135259] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:23.612 [2024-07-13 20:16:11.135274] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:23.612 [2024-07-13 20:16:11.135287] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.135295] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.135301] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1383120) 00:28:23.612 [2024-07-13 20:16:11.135312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.612 [2024-07-13 20:16:11.135334] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc1f0, cid 0, qid 0 00:28:23.612 [2024-07-13 20:16:11.135476] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.612 [2024-07-13 20:16:11.135492] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.612 [2024-07-13 20:16:11.135499] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.135506] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc1f0) on tqpair=0x1383120 00:28:23.612 [2024-07-13 20:16:11.135516] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:23.612 [2024-07-13 20:16:11.135531] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:23.612 [2024-07-13 20:16:11.135543] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.135551] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.612 [2024-07-13 20:16:11.135557] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1383120) 
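The read vs / read cap states here fetch the Version and Capabilities properties before the enable handshake repeats for nqn.2016-06.io.spdk:cnode1. The same fields surfaced in the discovery controller dump above (VS 1.3, Maximum Queue Entries 128 = MQES + 1, Reset Timeout 15000 ms = CAP.TO in 500 ms units). A sketch of reading them back through the public register accessors, assuming a connected ctrlr:

#include <stdio.h>
#include "spdk/nvme.h"

void print_regs(struct spdk_nvme_ctrlr *ctrlr)
{
    union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
    union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

    printf("VS %u.%u\n", (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr);
    printf("MQES %u (max queue entries = MQES + 1)\n", (unsigned)cap.bits.mqes);
    printf("TO %u (* 500 ms = reset timeout)\n", (unsigned)cap.bits.to);
}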
00:28:23.613 [2024-07-13 20:16:11.135568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.613 [2024-07-13 20:16:11.135589] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc1f0, cid 0, qid 0 00:28:23.613 [2024-07-13 20:16:11.135723] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.613 [2024-07-13 20:16:11.135738] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.613 [2024-07-13 20:16:11.135746] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.135752] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc1f0) on tqpair=0x1383120 00:28:23.613 [2024-07-13 20:16:11.135762] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:23.613 [2024-07-13 20:16:11.135780] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.135789] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.135796] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1383120) 00:28:23.613 [2024-07-13 20:16:11.135806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.613 [2024-07-13 20:16:11.135827] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc1f0, cid 0, qid 0 00:28:23.613 [2024-07-13 20:16:11.135973] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.613 [2024-07-13 20:16:11.135991] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.613 [2024-07-13 20:16:11.135999] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.136006] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc1f0) on tqpair=0x1383120 00:28:23.613 [2024-07-13 20:16:11.136015] nvme_ctrlr.c:3751:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:23.613 [2024-07-13 20:16:11.136024] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:23.613 [2024-07-13 20:16:11.136038] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:23.613 [2024-07-13 20:16:11.136148] nvme_ctrlr.c:3944:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:23.613 [2024-07-13 20:16:11.136156] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:23.613 [2024-07-13 20:16:11.136184] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.136192] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.136198] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1383120) 00:28:23.613 [2024-07-13 20:16:11.136208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.613 [2024-07-13 20:16:11.136230] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc1f0, cid 0, qid 0 00:28:23.613 [2024-07-13 20:16:11.136402] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.613 [2024-07-13 20:16:11.136418] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.613 [2024-07-13 20:16:11.136426] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.136432] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc1f0) on tqpair=0x1383120 00:28:23.613 [2024-07-13 20:16:11.136443] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:23.613 [2024-07-13 20:16:11.136460] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.136469] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.136475] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1383120) 00:28:23.613 [2024-07-13 20:16:11.136486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.613 [2024-07-13 20:16:11.136507] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc1f0, cid 0, qid 0 00:28:23.613 [2024-07-13 20:16:11.136641] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.613 [2024-07-13 20:16:11.136653] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.613 [2024-07-13 20:16:11.136660] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.136667] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc1f0) on tqpair=0x1383120 00:28:23.613 [2024-07-13 20:16:11.136677] nvme_ctrlr.c:3786:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:23.613 [2024-07-13 20:16:11.136685] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:23.613 [2024-07-13 20:16:11.136698] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:23.613 [2024-07-13 20:16:11.136713] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:23.613 [2024-07-13 20:16:11.136728] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.136740] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1383120) 00:28:23.613 [2024-07-13 20:16:11.136751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.613 [2024-07-13 20:16:11.136773] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc1f0, cid 0, qid 0 00:28:23.613 [2024-07-13 20:16:11.137040] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.613 [2024-07-13 20:16:11.137054] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.613 [2024-07-13 20:16:11.137062] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137068] 
nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1383120): datao=0, datal=4096, cccid=0 00:28:23.613 [2024-07-13 20:16:11.137076] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13dc1f0) on tqpair(0x1383120): expected_datao=0, payload_size=4096 00:28:23.613 [2024-07-13 20:16:11.137084] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137109] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137118] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137210] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.613 [2024-07-13 20:16:11.137222] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.613 [2024-07-13 20:16:11.137229] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137236] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc1f0) on tqpair=0x1383120 00:28:23.613 [2024-07-13 20:16:11.137252] nvme_ctrlr.c:1986:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:23.613 [2024-07-13 20:16:11.137262] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:23.613 [2024-07-13 20:16:11.137270] nvme_ctrlr.c:1993:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:23.613 [2024-07-13 20:16:11.137277] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:23.613 [2024-07-13 20:16:11.137285] nvme_ctrlr.c:2032:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:23.613 [2024-07-13 20:16:11.137293] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:23.613 [2024-07-13 20:16:11.137307] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:23.613 [2024-07-13 20:16:11.137320] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137328] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137334] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1383120) 00:28:23.613 [2024-07-13 20:16:11.137345] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:23.613 [2024-07-13 20:16:11.137367] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc1f0, cid 0, qid 0 00:28:23.613 [2024-07-13 20:16:11.137507] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.613 [2024-07-13 20:16:11.137519] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.613 [2024-07-13 20:16:11.137527] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137534] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc1f0) on tqpair=0x1383120 00:28:23.613 [2024-07-13 20:16:11.137546] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137553] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.613 [2024-07-13 
20:16:11.137560] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1383120) 00:28:23.613 [2024-07-13 20:16:11.137573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.613 [2024-07-13 20:16:11.137585] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137591] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137598] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1383120) 00:28:23.613 [2024-07-13 20:16:11.137607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.613 [2024-07-13 20:16:11.137617] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137624] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137630] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1383120) 00:28:23.613 [2024-07-13 20:16:11.137639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.613 [2024-07-13 20:16:11.137664] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137671] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137677] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1383120) 00:28:23.613 [2024-07-13 20:16:11.137686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.613 [2024-07-13 20:16:11.137695] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:23.613 [2024-07-13 20:16:11.137713] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:23.613 [2024-07-13 20:16:11.137725] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.137732] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1383120) 00:28:23.613 [2024-07-13 20:16:11.137743] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.613 [2024-07-13 20:16:11.137765] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc1f0, cid 0, qid 0 00:28:23.613 [2024-07-13 20:16:11.137791] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc350, cid 1, qid 0 00:28:23.613 [2024-07-13 20:16:11.137799] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc4b0, cid 2, qid 0 00:28:23.613 [2024-07-13 20:16:11.137807] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc610, cid 3, qid 0 00:28:23.613 [2024-07-13 20:16:11.137814] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc770, cid 4, qid 0 00:28:23.613 [2024-07-13 20:16:11.138011] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.613 [2024-07-13 20:16:11.138027] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
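The four ASYNC EVENT REQUEST (0c) submissions on cid 0-3 match the Async Event Request Limit of 4 advertised in the identify data, and the GET FEATURES KEEP ALIVE TIMER on cid 4 negotiates the keep-alive cadence (the 5000000 us interval logged next is consistent with half of SPDK's default 10 s keep_alive_timeout_ms). A sketch of the application-visible side of both, field and function names per the public headers:

#include <stdio.h>
#include "spdk/nvme.h"

/* Completions of the queued AERs are delivered through this callback. */
static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    (void)arg;
    printf("async event, cdw0 = 0x%08x\n", cpl->cdw0);
}

/* Set before spdk_nvme_connect(); negotiated via the GET FEATURES
 * KEEP ALIVE TIMER exchange, after which the host sends KEEP ALIVE
 * at roughly half this value. */
void set_keep_alive_opt(struct spdk_nvme_ctrlr_opts *opts)
{
    opts->keep_alive_timeout_ms = 10000;
}

/* Register after connect to observe the AER completions logged here. */
void setup_aer(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}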
00:28:23.613 [2024-07-13 20:16:11.138034] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.613 [2024-07-13 20:16:11.138041] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc770) on tqpair=0x1383120 00:28:23.614 [2024-07-13 20:16:11.138051] nvme_ctrlr.c:2904:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:23.614 [2024-07-13 20:16:11.138060] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:23.614 [2024-07-13 20:16:11.138074] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:23.614 [2024-07-13 20:16:11.138085] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:23.614 [2024-07-13 20:16:11.138097] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.138108] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.138115] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1383120) 00:28:23.614 [2024-07-13 20:16:11.138126] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:23.614 [2024-07-13 20:16:11.138148] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc770, cid 4, qid 0 00:28:23.614 [2024-07-13 20:16:11.138315] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.614 [2024-07-13 20:16:11.138328] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.614 [2024-07-13 20:16:11.138336] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.138343] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc770) on tqpair=0x1383120 00:28:23.614 [2024-07-13 20:16:11.138413] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:23.614 [2024-07-13 20:16:11.138432] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:23.614 [2024-07-13 20:16:11.138447] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.138470] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1383120) 00:28:23.614 [2024-07-13 20:16:11.138481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.614 [2024-07-13 20:16:11.138502] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc770, cid 4, qid 0 00:28:23.614 [2024-07-13 20:16:11.138714] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.614 [2024-07-13 20:16:11.138730] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.614 [2024-07-13 20:16:11.138737] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.138744] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1383120): datao=0, datal=4096, cccid=4 00:28:23.614 
[2024-07-13 20:16:11.138752] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13dc770) on tqpair(0x1383120): expected_datao=0, payload_size=4096 00:28:23.614 [2024-07-13 20:16:11.138760] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.138777] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.138786] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.181886] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.614 [2024-07-13 20:16:11.181906] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.614 [2024-07-13 20:16:11.181914] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.181921] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc770) on tqpair=0x1383120 00:28:23.614 [2024-07-13 20:16:11.181938] nvme_ctrlr.c:4570:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:23.614 [2024-07-13 20:16:11.181960] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:23.614 [2024-07-13 20:16:11.181978] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:23.614 [2024-07-13 20:16:11.181992] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.182000] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1383120) 00:28:23.614 [2024-07-13 20:16:11.182012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.614 [2024-07-13 20:16:11.182036] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc770, cid 4, qid 0 00:28:23.614 [2024-07-13 20:16:11.182229] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.614 [2024-07-13 20:16:11.182249] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.614 [2024-07-13 20:16:11.182257] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.182263] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1383120): datao=0, datal=4096, cccid=4 00:28:23.614 [2024-07-13 20:16:11.182272] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13dc770) on tqpair(0x1383120): expected_datao=0, payload_size=4096 00:28:23.614 [2024-07-13 20:16:11.182280] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.182290] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.182298] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.182323] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.614 [2024-07-13 20:16:11.182334] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.614 [2024-07-13 20:16:11.182341] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.182348] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc770) on tqpair=0x1383120 00:28:23.614 [2024-07-13 20:16:11.182370] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:23.614 [2024-07-13 20:16:11.182390] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:23.614 [2024-07-13 20:16:11.182404] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.182412] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1383120) 00:28:23.614 [2024-07-13 20:16:11.182423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.614 [2024-07-13 20:16:11.182446] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc770, cid 4, qid 0 00:28:23.614 [2024-07-13 20:16:11.182597] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.614 [2024-07-13 20:16:11.182613] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.614 [2024-07-13 20:16:11.182620] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.182627] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1383120): datao=0, datal=4096, cccid=4 00:28:23.614 [2024-07-13 20:16:11.182635] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13dc770) on tqpair(0x1383120): expected_datao=0, payload_size=4096 00:28:23.614 [2024-07-13 20:16:11.182642] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.182653] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.182660] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.182690] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.614 [2024-07-13 20:16:11.182702] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.614 [2024-07-13 20:16:11.182708] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.182715] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc770) on tqpair=0x1383120 00:28:23.614 [2024-07-13 20:16:11.182730] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:23.614 [2024-07-13 20:16:11.182745] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:23.614 [2024-07-13 20:16:11.182762] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:23.614 [2024-07-13 20:16:11.182773] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:23.614 [2024-07-13 20:16:11.182786] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:23.614 [2024-07-13 20:16:11.182795] nvme_ctrlr.c:2992:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:23.614 [2024-07-13 20:16:11.182803] nvme_ctrlr.c:1486:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 
30000 ms) 00:28:23.614 [2024-07-13 20:16:11.182812] nvme_ctrlr.c:1492:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:23.614 [2024-07-13 20:16:11.182834] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.182844] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1383120) 00:28:23.614 [2024-07-13 20:16:11.182855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.614 [2024-07-13 20:16:11.182874] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.182884] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.614 [2024-07-13 20:16:11.182890] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1383120) 00:28:23.614 [2024-07-13 20:16:11.182900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.614 [2024-07-13 20:16:11.182925] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc770, cid 4, qid 0 00:28:23.614 [2024-07-13 20:16:11.182937] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc8d0, cid 5, qid 0 00:28:23.614 [2024-07-13 20:16:11.183117] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.615 [2024-07-13 20:16:11.183131] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.615 [2024-07-13 20:16:11.183138] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.183145] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc770) on tqpair=0x1383120 00:28:23.615 [2024-07-13 20:16:11.183156] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.615 [2024-07-13 20:16:11.183166] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.615 [2024-07-13 20:16:11.183173] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.183180] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc8d0) on tqpair=0x1383120 00:28:23.615 [2024-07-13 20:16:11.183197] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.183207] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1383120) 00:28:23.615 [2024-07-13 20:16:11.183217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.615 [2024-07-13 20:16:11.183253] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc8d0, cid 5, qid 0 00:28:23.615 [2024-07-13 20:16:11.183477] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.615 [2024-07-13 20:16:11.183493] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.615 [2024-07-13 20:16:11.183500] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.183507] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc8d0) on tqpair=0x1383120 00:28:23.615 [2024-07-13 20:16:11.183524] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.183534] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=5 on tqpair(0x1383120) 00:28:23.615 [2024-07-13 20:16:11.183545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.615 [2024-07-13 20:16:11.183566] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc8d0, cid 5, qid 0 00:28:23.615 [2024-07-13 20:16:11.183700] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.615 [2024-07-13 20:16:11.183716] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.615 [2024-07-13 20:16:11.183724] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.183731] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc8d0) on tqpair=0x1383120 00:28:23.615 [2024-07-13 20:16:11.183748] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.183758] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1383120) 00:28:23.615 [2024-07-13 20:16:11.183769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.615 [2024-07-13 20:16:11.183789] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc8d0, cid 5, qid 0 00:28:23.615 [2024-07-13 20:16:11.183920] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.615 [2024-07-13 20:16:11.183935] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.615 [2024-07-13 20:16:11.183942] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.183949] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc8d0) on tqpair=0x1383120 00:28:23.615 [2024-07-13 20:16:11.183969] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.183980] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1383120) 00:28:23.615 [2024-07-13 20:16:11.183990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.615 [2024-07-13 20:16:11.184002] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184009] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1383120) 00:28:23.615 [2024-07-13 20:16:11.184019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.615 [2024-07-13 20:16:11.184030] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184037] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1383120) 00:28:23.615 [2024-07-13 20:16:11.184047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.615 [2024-07-13 20:16:11.184058] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184066] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1383120) 00:28:23.615 [2024-07-13 20:16:11.184075] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.615 [2024-07-13 20:16:11.184097] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc8d0, cid 5, qid 0 00:28:23.615 [2024-07-13 20:16:11.184108] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc770, cid 4, qid 0 00:28:23.615 [2024-07-13 20:16:11.184116] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dca30, cid 6, qid 0 00:28:23.615 [2024-07-13 20:16:11.184124] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dcb90, cid 7, qid 0 00:28:23.615 [2024-07-13 20:16:11.184356] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.615 [2024-07-13 20:16:11.184369] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.615 [2024-07-13 20:16:11.184376] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184383] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1383120): datao=0, datal=8192, cccid=5 00:28:23.615 [2024-07-13 20:16:11.184391] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13dc8d0) on tqpair(0x1383120): expected_datao=0, payload_size=8192 00:28:23.615 [2024-07-13 20:16:11.184398] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184423] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184434] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184443] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.615 [2024-07-13 20:16:11.184452] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.615 [2024-07-13 20:16:11.184458] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184465] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1383120): datao=0, datal=512, cccid=4 00:28:23.615 [2024-07-13 20:16:11.184473] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13dc770) on tqpair(0x1383120): expected_datao=0, payload_size=512 00:28:23.615 [2024-07-13 20:16:11.184480] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184489] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184497] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184505] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.615 [2024-07-13 20:16:11.184514] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.615 [2024-07-13 20:16:11.184521] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184527] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1383120): datao=0, datal=512, cccid=6 00:28:23.615 [2024-07-13 20:16:11.184535] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13dca30) on tqpair(0x1383120): expected_datao=0, payload_size=512 00:28:23.615 [2024-07-13 20:16:11.184543] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184551] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184559] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184567] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:23.615 [2024-07-13 20:16:11.184576] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:23.615 [2024-07-13 20:16:11.184583] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184589] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1383120): datao=0, datal=4096, cccid=7 00:28:23.615 [2024-07-13 20:16:11.184597] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13dcb90) on tqpair(0x1383120): expected_datao=0, payload_size=4096 00:28:23.615 [2024-07-13 20:16:11.184604] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184614] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184621] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184632] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.615 [2024-07-13 20:16:11.184642] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.615 [2024-07-13 20:16:11.184649] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184656] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc8d0) on tqpair=0x1383120 00:28:23.615 [2024-07-13 20:16:11.184676] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.615 [2024-07-13 20:16:11.184687] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.615 [2024-07-13 20:16:11.184694] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184716] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc770) on tqpair=0x1383120 00:28:23.615 [2024-07-13 20:16:11.184732] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.615 [2024-07-13 20:16:11.184743] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.615 [2024-07-13 20:16:11.184749] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184756] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dca30) on tqpair=0x1383120 00:28:23.615 [2024-07-13 20:16:11.184770] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.615 [2024-07-13 20:16:11.184784] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.615 [2024-07-13 20:16:11.184791] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.615 [2024-07-13 20:16:11.184812] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dcb90) on tqpair=0x1383120 00:28:23.615 ===================================================== 00:28:23.615 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:23.615 ===================================================== 00:28:23.615 Controller Capabilities/Features 00:28:23.615 ================================ 00:28:23.615 Vendor ID: 8086 00:28:23.615 Subsystem Vendor ID: 8086 00:28:23.615 Serial Number: SPDK00000000000001 00:28:23.615 Model Number: SPDK bdev Controller 00:28:23.615 Firmware Version: 24.05.1 00:28:23.615 Recommended Arb Burst: 6 00:28:23.615 IEEE OUI Identifier: e4 d2 5c 00:28:23.615 Multi-path I/O 00:28:23.615 May have multiple 
subsystem ports: Yes 00:28:23.615 May have multiple controllers: Yes 00:28:23.615 Associated with SR-IOV VF: No 00:28:23.615 Max Data Transfer Size: 131072 00:28:23.615 Max Number of Namespaces: 32 00:28:23.615 Max Number of I/O Queues: 127 00:28:23.615 NVMe Specification Version (VS): 1.3 00:28:23.615 NVMe Specification Version (Identify): 1.3 00:28:23.615 Maximum Queue Entries: 128 00:28:23.615 Contiguous Queues Required: Yes 00:28:23.615 Arbitration Mechanisms Supported 00:28:23.615 Weighted Round Robin: Not Supported 00:28:23.615 Vendor Specific: Not Supported 00:28:23.615 Reset Timeout: 15000 ms 00:28:23.615 Doorbell Stride: 4 bytes 00:28:23.615 NVM Subsystem Reset: Not Supported 00:28:23.615 Command Sets Supported 00:28:23.615 NVM Command Set: Supported 00:28:23.615 Boot Partition: Not Supported 00:28:23.615 Memory Page Size Minimum: 4096 bytes 00:28:23.616 Memory Page Size Maximum: 4096 bytes 00:28:23.616 Persistent Memory Region: Not Supported 00:28:23.616 Optional Asynchronous Events Supported 00:28:23.616 Namespace Attribute Notices: Supported 00:28:23.616 Firmware Activation Notices: Not Supported 00:28:23.616 ANA Change Notices: Not Supported 00:28:23.616 PLE Aggregate Log Change Notices: Not Supported 00:28:23.616 LBA Status Info Alert Notices: Not Supported 00:28:23.616 EGE Aggregate Log Change Notices: Not Supported 00:28:23.616 Normal NVM Subsystem Shutdown event: Not Supported 00:28:23.616 Zone Descriptor Change Notices: Not Supported 00:28:23.616 Discovery Log Change Notices: Not Supported 00:28:23.616 Controller Attributes 00:28:23.616 128-bit Host Identifier: Supported 00:28:23.616 Non-Operational Permissive Mode: Not Supported 00:28:23.616 NVM Sets: Not Supported 00:28:23.616 Read Recovery Levels: Not Supported 00:28:23.616 Endurance Groups: Not Supported 00:28:23.616 Predictable Latency Mode: Not Supported 00:28:23.616 Traffic Based Keep ALive: Not Supported 00:28:23.616 Namespace Granularity: Not Supported 00:28:23.616 SQ Associations: Not Supported 00:28:23.616 UUID List: Not Supported 00:28:23.616 Multi-Domain Subsystem: Not Supported 00:28:23.616 Fixed Capacity Management: Not Supported 00:28:23.616 Variable Capacity Management: Not Supported 00:28:23.616 Delete Endurance Group: Not Supported 00:28:23.616 Delete NVM Set: Not Supported 00:28:23.616 Extended LBA Formats Supported: Not Supported 00:28:23.616 Flexible Data Placement Supported: Not Supported 00:28:23.616 00:28:23.616 Controller Memory Buffer Support 00:28:23.616 ================================ 00:28:23.616 Supported: No 00:28:23.616 00:28:23.616 Persistent Memory Region Support 00:28:23.616 ================================ 00:28:23.616 Supported: No 00:28:23.616 00:28:23.616 Admin Command Set Attributes 00:28:23.616 ============================ 00:28:23.616 Security Send/Receive: Not Supported 00:28:23.616 Format NVM: Not Supported 00:28:23.616 Firmware Activate/Download: Not Supported 00:28:23.616 Namespace Management: Not Supported 00:28:23.616 Device Self-Test: Not Supported 00:28:23.616 Directives: Not Supported 00:28:23.616 NVMe-MI: Not Supported 00:28:23.616 Virtualization Management: Not Supported 00:28:23.616 Doorbell Buffer Config: Not Supported 00:28:23.616 Get LBA Status Capability: Not Supported 00:28:23.616 Command & Feature Lockdown Capability: Not Supported 00:28:23.616 Abort Command Limit: 4 00:28:23.616 Async Event Request Limit: 4 00:28:23.616 Number of Firmware Slots: N/A 00:28:23.616 Firmware Slot 1 Read-Only: N/A 00:28:23.616 Firmware Activation Without Reset: N/A 00:28:23.616 
Multiple Update Detection Support: N/A 00:28:23.616 Firmware Update Granularity: No Information Provided 00:28:23.616 Per-Namespace SMART Log: No 00:28:23.616 Asymmetric Namespace Access Log Page: Not Supported 00:28:23.616 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:23.616 Command Effects Log Page: Supported 00:28:23.616 Get Log Page Extended Data: Supported 00:28:23.616 Telemetry Log Pages: Not Supported 00:28:23.616 Persistent Event Log Pages: Not Supported 00:28:23.616 Supported Log Pages Log Page: May Support 00:28:23.616 Commands Supported & Effects Log Page: Not Supported 00:28:23.616 Feature Identifiers & Effects Log Page:May Support 00:28:23.616 NVMe-MI Commands & Effects Log Page: May Support 00:28:23.616 Data Area 4 for Telemetry Log: Not Supported 00:28:23.616 Error Log Page Entries Supported: 128 00:28:23.616 Keep Alive: Supported 00:28:23.616 Keep Alive Granularity: 10000 ms 00:28:23.616 00:28:23.616 NVM Command Set Attributes 00:28:23.616 ========================== 00:28:23.616 Submission Queue Entry Size 00:28:23.616 Max: 64 00:28:23.616 Min: 64 00:28:23.616 Completion Queue Entry Size 00:28:23.616 Max: 16 00:28:23.616 Min: 16 00:28:23.616 Number of Namespaces: 32 00:28:23.616 Compare Command: Supported 00:28:23.616 Write Uncorrectable Command: Not Supported 00:28:23.616 Dataset Management Command: Supported 00:28:23.616 Write Zeroes Command: Supported 00:28:23.616 Set Features Save Field: Not Supported 00:28:23.616 Reservations: Supported 00:28:23.616 Timestamp: Not Supported 00:28:23.616 Copy: Supported 00:28:23.616 Volatile Write Cache: Present 00:28:23.616 Atomic Write Unit (Normal): 1 00:28:23.616 Atomic Write Unit (PFail): 1 00:28:23.616 Atomic Compare & Write Unit: 1 00:28:23.616 Fused Compare & Write: Supported 00:28:23.616 Scatter-Gather List 00:28:23.616 SGL Command Set: Supported 00:28:23.616 SGL Keyed: Supported 00:28:23.616 SGL Bit Bucket Descriptor: Not Supported 00:28:23.616 SGL Metadata Pointer: Not Supported 00:28:23.616 Oversized SGL: Not Supported 00:28:23.616 SGL Metadata Address: Not Supported 00:28:23.616 SGL Offset: Supported 00:28:23.616 Transport SGL Data Block: Not Supported 00:28:23.616 Replay Protected Memory Block: Not Supported 00:28:23.616 00:28:23.616 Firmware Slot Information 00:28:23.616 ========================= 00:28:23.616 Active slot: 1 00:28:23.616 Slot 1 Firmware Revision: 24.05.1 00:28:23.616 00:28:23.616 00:28:23.616 Commands Supported and Effects 00:28:23.616 ============================== 00:28:23.616 Admin Commands 00:28:23.616 -------------- 00:28:23.616 Get Log Page (02h): Supported 00:28:23.616 Identify (06h): Supported 00:28:23.616 Abort (08h): Supported 00:28:23.616 Set Features (09h): Supported 00:28:23.616 Get Features (0Ah): Supported 00:28:23.616 Asynchronous Event Request (0Ch): Supported 00:28:23.616 Keep Alive (18h): Supported 00:28:23.616 I/O Commands 00:28:23.616 ------------ 00:28:23.616 Flush (00h): Supported LBA-Change 00:28:23.616 Write (01h): Supported LBA-Change 00:28:23.616 Read (02h): Supported 00:28:23.616 Compare (05h): Supported 00:28:23.616 Write Zeroes (08h): Supported LBA-Change 00:28:23.616 Dataset Management (09h): Supported LBA-Change 00:28:23.616 Copy (19h): Supported LBA-Change 00:28:23.616 Unknown (79h): Supported LBA-Change 00:28:23.616 Unknown (7Ah): Supported 00:28:23.616 00:28:23.616 Error Log 00:28:23.616 ========= 00:28:23.616 00:28:23.616 Arbitration 00:28:23.616 =========== 00:28:23.616 Arbitration Burst: 1 00:28:23.616 00:28:23.616 Power Management 00:28:23.616 ================ 
00:28:23.616 Number of Power States: 1 00:28:23.616 Current Power State: Power State #0 00:28:23.616 Power State #0: 00:28:23.616 Max Power: 0.00 W 00:28:23.616 Non-Operational State: Operational 00:28:23.616 Entry Latency: Not Reported 00:28:23.616 Exit Latency: Not Reported 00:28:23.616 Relative Read Throughput: 0 00:28:23.616 Relative Read Latency: 0 00:28:23.616 Relative Write Throughput: 0 00:28:23.616 Relative Write Latency: 0 00:28:23.616 Idle Power: Not Reported 00:28:23.616 Active Power: Not Reported 00:28:23.616 Non-Operational Permissive Mode: Not Supported 00:28:23.616 00:28:23.616 Health Information 00:28:23.616 ================== 00:28:23.616 Critical Warnings: 00:28:23.616 Available Spare Space: OK 00:28:23.616 Temperature: OK 00:28:23.616 Device Reliability: OK 00:28:23.616 Read Only: No 00:28:23.616 Volatile Memory Backup: OK 00:28:23.616 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:23.616 Temperature Threshold: [2024-07-13 20:16:11.184954] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.616 [2024-07-13 20:16:11.184967] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1383120) 00:28:23.616 [2024-07-13 20:16:11.184978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.616 [2024-07-13 20:16:11.185001] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dcb90, cid 7, qid 0 00:28:23.616 [2024-07-13 20:16:11.185178] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.616 [2024-07-13 20:16:11.185194] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.616 [2024-07-13 20:16:11.185201] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.616 [2024-07-13 20:16:11.185208] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dcb90) on tqpair=0x1383120 00:28:23.616 [2024-07-13 20:16:11.185248] nvme_ctrlr.c:4234:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:23.616 [2024-07-13 20:16:11.185270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.616 [2024-07-13 20:16:11.185282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.616 [2024-07-13 20:16:11.185292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.616 [2024-07-13 20:16:11.185302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.616 [2024-07-13 20:16:11.185330] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.616 [2024-07-13 20:16:11.185338] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.616 [2024-07-13 20:16:11.185344] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1383120) 00:28:23.616 [2024-07-13 20:16:11.185355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.616 [2024-07-13 20:16:11.185377] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc610, cid 3, qid 0 00:28:23.616 [2024-07-13 20:16:11.185557] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:28:23.616 [2024-07-13 20:16:11.185570] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.616 [2024-07-13 20:16:11.185577] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.616 [2024-07-13 20:16:11.185584] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc610) on tqpair=0x1383120 00:28:23.616 [2024-07-13 20:16:11.185597] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.616 [2024-07-13 20:16:11.185605] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.617 [2024-07-13 20:16:11.185611] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1383120) 00:28:23.617 [2024-07-13 20:16:11.185622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.617 [2024-07-13 20:16:11.185648] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc610, cid 3, qid 0 00:28:23.617 [2024-07-13 20:16:11.185796] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.617 [2024-07-13 20:16:11.185811] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.617 [2024-07-13 20:16:11.185818] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.617 [2024-07-13 20:16:11.185825] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc610) on tqpair=0x1383120 00:28:23.617 [2024-07-13 20:16:11.185834] nvme_ctrlr.c:1084:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:23.617 [2024-07-13 20:16:11.185846] nvme_ctrlr.c:1087:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:23.617 [2024-07-13 20:16:11.185863] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:23.617 [2024-07-13 20:16:11.189886] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:23.617 [2024-07-13 20:16:11.189893] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1383120) 00:28:23.617 [2024-07-13 20:16:11.189904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.617 [2024-07-13 20:16:11.189927] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13dc610, cid 3, qid 0 00:28:23.617 [2024-07-13 20:16:11.190105] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:23.617 [2024-07-13 20:16:11.190118] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:23.617 [2024-07-13 20:16:11.190125] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:23.617 [2024-07-13 20:16:11.190132] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x13dc610) on tqpair=0x1383120 00:28:23.617 [2024-07-13 20:16:11.190147] nvme_ctrlr.c:1206:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:28:23.617 0 Kelvin (-273 Celsius) 00:28:23.617 Available Spare: 0% 00:28:23.617 Available Spare Threshold: 0% 00:28:23.617 Life Percentage Used: 0% 00:28:23.617 Data Units Read: 0 00:28:23.617 Data Units Written: 0 00:28:23.617 Host Read Commands: 0 00:28:23.617 Host Write Commands: 0 00:28:23.617 Controller Busy Time: 0 minutes 00:28:23.617 Power Cycles: 0 00:28:23.617 Power On Hours: 0 hours 00:28:23.617 Unsafe Shutdowns: 0 00:28:23.617 Unrecoverable Media Errors: 0 
00:28:23.617 Lifetime Error Log Entries: 0 00:28:23.617 Warning Temperature Time: 0 minutes 00:28:23.617 Critical Temperature Time: 0 minutes 00:28:23.617 00:28:23.617 Number of Queues 00:28:23.617 ================ 00:28:23.617 Number of I/O Submission Queues: 127 00:28:23.617 Number of I/O Completion Queues: 127 00:28:23.617 00:28:23.617 Active Namespaces 00:28:23.617 ================= 00:28:23.617 Namespace ID:1 00:28:23.617 Error Recovery Timeout: Unlimited 00:28:23.617 Command Set Identifier: NVM (00h) 00:28:23.617 Deallocate: Supported 00:28:23.617 Deallocated/Unwritten Error: Not Supported 00:28:23.617 Deallocated Read Value: Unknown 00:28:23.617 Deallocate in Write Zeroes: Not Supported 00:28:23.617 Deallocated Guard Field: 0xFFFF 00:28:23.617 Flush: Supported 00:28:23.617 Reservation: Supported 00:28:23.617 Namespace Sharing Capabilities: Multiple Controllers 00:28:23.617 Size (in LBAs): 131072 (0GiB) 00:28:23.617 Capacity (in LBAs): 131072 (0GiB) 00:28:23.617 Utilization (in LBAs): 131072 (0GiB) 00:28:23.617 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:23.617 EUI64: ABCDEF0123456789 00:28:23.617 UUID: 93e149ce-2a94-4551-a152-de145e3b6a84 00:28:23.617 Thin Provisioning: Not Supported 00:28:23.617 Per-NS Atomic Units: Yes 00:28:23.617 Atomic Boundary Size (Normal): 0 00:28:23.617 Atomic Boundary Size (PFail): 0 00:28:23.617 Atomic Boundary Offset: 0 00:28:23.617 Maximum Single Source Range Length: 65535 00:28:23.617 Maximum Copy Length: 65535 00:28:23.617 Maximum Source Range Count: 1 00:28:23.617 NGUID/EUI64 Never Reused: No 00:28:23.617 Namespace Write Protected: No 00:28:23.617 Number of LBA Formats: 1 00:28:23.617 Current LBA Format: LBA Format #00 00:28:23.617 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:23.617 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:23.617 rmmod nvme_tcp 00:28:23.617 rmmod nvme_fabrics 00:28:23.617 rmmod nvme_keyring 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3290378 ']' 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3290378 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@946 -- # '[' -z 3290378 ']' 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 3290378 00:28:23.617 20:16:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:28:23.875 20:16:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:23.875 20:16:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3290378 00:28:23.875 20:16:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:23.875 20:16:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:23.875 20:16:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3290378' 00:28:23.875 killing process with pid 3290378 00:28:23.875 20:16:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 3290378 00:28:23.875 20:16:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 3290378 00:28:24.134 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:24.134 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:24.134 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:24.134 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:24.134 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:24.134 20:16:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.134 20:16:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:24.134 20:16:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.039 20:16:13 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:26.039 00:28:26.039 real 0m5.412s 00:28:26.039 user 0m4.186s 00:28:26.039 sys 0m1.925s 00:28:26.039 20:16:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:26.039 20:16:13 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:26.039 ************************************ 00:28:26.039 END TEST nvmf_identify 00:28:26.039 ************************************ 00:28:26.039 20:16:13 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:26.039 20:16:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:26.039 20:16:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:26.039 20:16:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:26.039 ************************************ 00:28:26.039 START TEST nvmf_perf 00:28:26.039 ************************************ 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:26.039 * Looking for test storage... 
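(Annotation: the nvmf_identify teardown traced above is the standard epilogue for these host tests: tear the subsystem down over the RPC socket, unload the kernel initiator modules - the bare rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines are modprobe's verbose output - and only then stop the target process. Condensed as a sketch of what identify.sh and nvmftestfini do, not a literal excerpt, with the pid from this run and paths relative to the spdk repo root:

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # per-test cleanup
  modprobe -v -r nvme-tcp        # also drags out nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 3290378                   # killprocess: SIGTERM to the nvmf_tgt reactor, then wait

Everything after this point belongs to the next test, nvmf_perf, which re-sources nvmf/common.sh and rebuilds the same target/initiator topology from scratch.)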
00:28:26.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.039 20:16:13 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:26.039 20:16:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:26.299 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:26.299 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:26.299 20:16:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:26.299 20:16:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:28.200 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:28.200 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:28.200 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:28.200 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:28.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:28:28.200 00:28:28.200 --- 10.0.0.2 ping statistics --- 00:28:28.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.200 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:28:28.200 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
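A note on what the nvmf_tcp_init trace above is building: rather than two hosts, the test wires the two ports of one NIC back-to-back and moves the target port into its own network namespace, so NVMe/TCP traffic crosses a real link between the ports. A minimal sketch of that setup, using the interface names and addresses from this run (helper plumbing omitted):

# Target side lives in its own namespace; initiator side stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# 10.0.0.1 = initiator (cvl_0_1), 10.0.0.2 = target (cvl_0_0), same /24.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP (port 4420) on the initiator-side interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Both directions must answer before the target starts, which is exactly
# what the two pings logged around this point verify.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1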
00:28:28.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:28:28.200 00:28:28.200 --- 10.0.0.1 ping statistics --- 00:28:28.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.201 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3292459 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3292459 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 3292459 ']' 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:28.201 20:16:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:28.201 [2024-07-13 20:16:15.775413] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:28:28.201 [2024-07-13 20:16:15.775485] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.201 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.201 [2024-07-13 20:16:15.838319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:28.458 [2024-07-13 20:16:15.924535] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.458 [2024-07-13 20:16:15.924584] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
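With the link verified and nvmf_tgt starting inside the namespace, the bring-up that follows in the trace is the standard NVMe-oF target configuration, all driven over the RPC socket. Condensed from the commands logged below (rpc.py paths shortened; flags exactly as the test uses them):

# 1. Create the TCP transport.
rpc.py nvmf_create_transport -t tcp -o

# 2. Create a subsystem allowing any host (-a) with a fixed serial (-s).
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

# 3. Attach the bdevs as namespaces (a Malloc ramdisk plus the local NVMe drive).
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1

# 4. Listen on the target-side address; initiators connect to 10.0.0.2:4420.
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420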
00:28:28.458 [2024-07-13 20:16:15.924613] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.458 [2024-07-13 20:16:15.924624] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.458 [2024-07-13 20:16:15.924634] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:28.458 [2024-07-13 20:16:15.924722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.458 [2024-07-13 20:16:15.924789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:28.458 [2024-07-13 20:16:15.924836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:28.458 [2024-07-13 20:16:15.924838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.458 20:16:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:28.458 20:16:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:28:28.458 20:16:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:28.458 20:16:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:28.458 20:16:16 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:28.458 20:16:16 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.458 20:16:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:28.458 20:16:16 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:31.838 20:16:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:31.838 20:16:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:31.838 20:16:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:31.838 20:16:19 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:32.096 20:16:19 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:32.096 20:16:19 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:32.096 20:16:19 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:32.096 20:16:19 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:32.096 20:16:19 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:32.353 [2024-07-13 20:16:19.924561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.353 20:16:19 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:32.611 20:16:20 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:32.611 20:16:20 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:32.868 20:16:20 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:32.868 20:16:20 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:33.125 20:16:20 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.383 [2024-07-13 20:16:20.876000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.383 20:16:20 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:33.641 20:16:21 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:33.641 20:16:21 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:33.641 20:16:21 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:33.641 20:16:21 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:35.017 Initializing NVMe Controllers 00:28:35.017 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:35.017 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:35.017 Initialization complete. Launching workers. 00:28:35.017 ======================================================== 00:28:35.017 Latency(us) 00:28:35.017 Device Information : IOPS MiB/s Average min max 00:28:35.017 PCIE (0000:88:00.0) NSID 1 from core 0: 85092.36 332.39 375.29 24.25 8293.97 00:28:35.017 ======================================================== 00:28:35.017 Total : 85092.36 332.39 375.29 24.25 8293.97 00:28:35.017 00:28:35.017 20:16:22 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:35.017 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.948 Initializing NVMe Controllers 00:28:35.948 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:35.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:35.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:35.948 Initialization complete. Launching workers. 
00:28:35.948 ======================================================== 00:28:35.948 Latency(us) 00:28:35.948 Device Information : IOPS MiB/s Average min max 00:28:35.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 96.66 0.38 10378.04 196.62 45733.34 00:28:35.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 64.77 0.25 15929.23 7944.52 47902.06 00:28:35.948 ======================================================== 00:28:35.948 Total : 161.43 0.63 12605.37 196.62 47902.06 00:28:35.948 00:28:36.205 20:16:23 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:36.205 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.580 Initializing NVMe Controllers 00:28:37.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:37.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:37.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:37.580 Initialization complete. Launching workers. 00:28:37.580 ======================================================== 00:28:37.580 Latency(us) 00:28:37.580 Device Information : IOPS MiB/s Average min max 00:28:37.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7758.64 30.31 4125.53 653.43 8439.84 00:28:37.580 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3920.70 15.32 8214.85 5137.74 15861.34 00:28:37.580 ======================================================== 00:28:37.580 Total : 11679.34 45.62 5498.30 653.43 15861.34 00:28:37.580 00:28:37.580 20:16:25 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:37.580 20:16:25 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:37.580 20:16:25 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:37.580 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.106 Initializing NVMe Controllers 00:28:40.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.106 Controller IO queue size 128, less than required. 00:28:40.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:40.106 Controller IO queue size 128, less than required. 00:28:40.106 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:40.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:40.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:40.106 Initialization complete. Launching workers. 
00:28:40.106 ======================================================== 00:28:40.106 Latency(us) 00:28:40.106 Device Information : IOPS MiB/s Average min max 00:28:40.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 851.97 212.99 156283.42 97284.49 242065.87 00:28:40.106 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 580.98 145.24 225981.72 59697.11 342368.47 00:28:40.106 ======================================================== 00:28:40.106 Total : 1432.94 358.24 184542.12 59697.11 342368.47 00:28:40.106 00:28:40.107 20:16:27 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:40.107 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.107 No valid NVMe controllers or AIO or URING devices found 00:28:40.107 Initializing NVMe Controllers 00:28:40.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:40.107 Controller IO queue size 128, less than required. 00:28:40.107 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:40.107 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:40.107 Controller IO queue size 128, less than required. 00:28:40.107 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:40.107 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:40.107 WARNING: Some requested NVMe devices were skipped 00:28:40.364 20:16:27 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:40.364 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.894 Initializing NVMe Controllers 00:28:42.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:42.894 Controller IO queue size 128, less than required. 00:28:42.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:42.894 Controller IO queue size 128, less than required. 00:28:42.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:42.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:42.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:42.894 Initialization complete. Launching workers. 
00:28:42.894 00:28:42.894 ==================== 00:28:42.894 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:42.894 TCP transport: 00:28:42.894 polls: 27055 00:28:42.894 idle_polls: 7387 00:28:42.894 sock_completions: 19668 00:28:42.894 nvme_completions: 3983 00:28:42.894 submitted_requests: 6010 00:28:42.894 queued_requests: 1 00:28:42.894 00:28:42.894 ==================== 00:28:42.894 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:42.894 TCP transport: 00:28:42.894 polls: 28108 00:28:42.894 idle_polls: 9327 00:28:42.894 sock_completions: 18781 00:28:42.894 nvme_completions: 4047 00:28:42.894 submitted_requests: 6066 00:28:42.894 queued_requests: 1 00:28:42.894 ======================================================== 00:28:42.894 Latency(us) 00:28:42.894 Device Information : IOPS MiB/s Average min max 00:28:42.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 994.40 248.60 131736.70 71339.60 214817.49 00:28:42.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1010.39 252.60 128939.38 47524.60 201504.52 00:28:42.895 ======================================================== 00:28:42.895 Total : 2004.79 501.20 130326.89 47524.60 214817.49 00:28:42.895 00:28:42.895 20:16:30 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:42.895 20:16:30 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:43.460 20:16:30 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:43.460 20:16:30 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:43.460 20:16:30 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:46.736 20:16:34 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=7e3eb988-7177-4ed8-84a3-a37bca0dadc3 00:28:46.736 20:16:34 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 7e3eb988-7177-4ed8-84a3-a37bca0dadc3 00:28:46.736 20:16:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=7e3eb988-7177-4ed8-84a3-a37bca0dadc3 00:28:46.736 20:16:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:46.736 20:16:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:46.736 20:16:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:46.736 20:16:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:46.736 20:16:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:46.736 { 00:28:46.736 "uuid": "7e3eb988-7177-4ed8-84a3-a37bca0dadc3", 00:28:46.736 "name": "lvs_0", 00:28:46.736 "base_bdev": "Nvme0n1", 00:28:46.736 "total_data_clusters": 238234, 00:28:46.736 "free_clusters": 238234, 00:28:46.736 "block_size": 512, 00:28:46.736 "cluster_size": 4194304 00:28:46.736 } 00:28:46.736 ]' 00:28:46.736 20:16:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="7e3eb988-7177-4ed8-84a3-a37bca0dadc3") .free_clusters' 00:28:46.736 20:16:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=238234 00:28:46.736 20:16:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="7e3eb988-7177-4ed8-84a3-a37bca0dadc3") .cluster_size' 00:28:46.993 20:16:34 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:46.993 20:16:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=952936 00:28:46.993 20:16:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 952936 00:28:46.993 952936 00:28:46.993 20:16:34 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:46.993 20:16:34 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:46.993 20:16:34 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7e3eb988-7177-4ed8-84a3-a37bca0dadc3 lbd_0 20480 00:28:47.250 20:16:34 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=54e802cc-c0d3-46ba-883f-1a782ea8a96b 00:28:47.250 20:16:34 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 54e802cc-c0d3-46ba-883f-1a782ea8a96b lvs_n_0 00:28:48.181 20:16:35 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=7d692611-bf3f-463f-9ef4-585484fb2112 00:28:48.181 20:16:35 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 7d692611-bf3f-463f-9ef4-585484fb2112 00:28:48.181 20:16:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=7d692611-bf3f-463f-9ef4-585484fb2112 00:28:48.181 20:16:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:28:48.181 20:16:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:28:48.181 20:16:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:28:48.181 20:16:35 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:48.438 20:16:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:28:48.438 { 00:28:48.438 "uuid": "7e3eb988-7177-4ed8-84a3-a37bca0dadc3", 00:28:48.438 "name": "lvs_0", 00:28:48.438 "base_bdev": "Nvme0n1", 00:28:48.438 "total_data_clusters": 238234, 00:28:48.438 "free_clusters": 233114, 00:28:48.438 "block_size": 512, 00:28:48.438 "cluster_size": 4194304 00:28:48.438 }, 00:28:48.438 { 00:28:48.438 "uuid": "7d692611-bf3f-463f-9ef4-585484fb2112", 00:28:48.438 "name": "lvs_n_0", 00:28:48.438 "base_bdev": "54e802cc-c0d3-46ba-883f-1a782ea8a96b", 00:28:48.438 "total_data_clusters": 5114, 00:28:48.438 "free_clusters": 5114, 00:28:48.438 "block_size": 512, 00:28:48.438 "cluster_size": 4194304 00:28:48.438 } 00:28:48.438 ]' 00:28:48.438 20:16:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="7d692611-bf3f-463f-9ef4-585484fb2112") .free_clusters' 00:28:48.438 20:16:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=5114 00:28:48.438 20:16:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="7d692611-bf3f-463f-9ef4-585484fb2112") .cluster_size' 00:28:48.697 20:16:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:28:48.697 20:16:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=20456 00:28:48.697 20:16:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 20456 00:28:48.697 20456 00:28:48.697 20:16:36 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:48.697 20:16:36 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7d692611-bf3f-463f-9ef4-585484fb2112 lbd_nest_0 20456 00:28:48.997 20:16:36 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=08c7a8c4-3ec6-4868-9d60-e8092f0b87f9 00:28:48.997 20:16:36 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:48.997 20:16:36 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:48.997 20:16:36 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 08c7a8c4-3ec6-4868-9d60-e8092f0b87f9 00:28:49.255 20:16:36 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.513 20:16:37 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:49.513 20:16:37 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:49.513 20:16:37 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:49.513 20:16:37 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:49.513 20:16:37 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:49.513 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.730 Initializing NVMe Controllers 00:29:01.730 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:01.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:01.730 Initialization complete. Launching workers. 00:29:01.730 ======================================================== 00:29:01.730 Latency(us) 00:29:01.730 Device Information : IOPS MiB/s Average min max 00:29:01.730 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.40 0.02 21165.36 227.68 45974.29 00:29:01.730 ======================================================== 00:29:01.730 Total : 47.40 0.02 21165.36 227.68 45974.29 00:29:01.730 00:29:01.730 20:16:47 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:01.730 20:16:47 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:01.730 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.709 Initializing NVMe Controllers 00:29:11.709 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:11.709 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:11.709 Initialization complete. Launching workers. 
00:29:11.709 ======================================================== 00:29:11.709 Latency(us) 00:29:11.709 Device Information : IOPS MiB/s Average min max 00:29:11.709 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.60 9.70 12893.14 4022.00 47899.26 00:29:11.709 ======================================================== 00:29:11.709 Total : 77.60 9.70 12893.14 4022.00 47899.26 00:29:11.709 00:29:11.709 20:16:57 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:11.709 20:16:57 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:11.709 20:16:57 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:11.709 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.684 Initializing NVMe Controllers 00:29:21.684 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:21.684 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:21.684 Initialization complete. Launching workers. 00:29:21.684 ======================================================== 00:29:21.684 Latency(us) 00:29:21.684 Device Information : IOPS MiB/s Average min max 00:29:21.684 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7095.90 3.46 4509.29 296.44 12024.47 00:29:21.684 ======================================================== 00:29:21.684 Total : 7095.90 3.46 4509.29 296.44 12024.47 00:29:21.684 00:29:21.684 20:17:07 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:21.684 20:17:07 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:21.684 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.724 Initializing NVMe Controllers 00:29:31.725 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:31.725 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:31.725 Initialization complete. Launching workers. 00:29:31.725 ======================================================== 00:29:31.725 Latency(us) 00:29:31.725 Device Information : IOPS MiB/s Average min max 00:29:31.725 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1926.81 240.85 16609.45 1177.88 37578.16 00:29:31.725 ======================================================== 00:29:31.725 Total : 1926.81 240.85 16609.45 1177.88 37578.16 00:29:31.725 00:29:31.725 20:17:18 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:31.725 20:17:18 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:31.725 20:17:18 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:31.725 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.695 Initializing NVMe Controllers 00:29:41.695 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:41.695 Controller IO queue size 128, less than required. 00:29:41.695 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
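The block of ten-second runs around this point is driven by the small sweep declared just before them: queue depths 1/32/128 crossed with IO sizes 512/131072, same 50/50 randrw mix each time. Paraphrased from host/perf.sh as traced above:

# Six runs total: every queue depth at every IO size.
qd_depth=(1 32 128)
io_size=(512 131072)
for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
        ./build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
done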
00:29:41.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:41.695 Initialization complete. Launching workers. 00:29:41.695 ======================================================== 00:29:41.695 Latency(us) 00:29:41.695 Device Information : IOPS MiB/s Average min max 00:29:41.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11516.17 5.62 11120.55 1680.01 23992.54 00:29:41.695 ======================================================== 00:29:41.695 Total : 11516.17 5.62 11120.55 1680.01 23992.54 00:29:41.695 00:29:41.695 20:17:28 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:41.695 20:17:28 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:41.695 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.674 Initializing NVMe Controllers 00:29:51.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:51.674 Controller IO queue size 128, less than required. 00:29:51.674 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:51.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:51.674 Initialization complete. Launching workers. 00:29:51.674 ======================================================== 00:29:51.674 Latency(us) 00:29:51.674 Device Information : IOPS MiB/s Average min max 00:29:51.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1184.80 148.10 108694.35 16249.12 237467.77 00:29:51.674 ======================================================== 00:29:51.674 Total : 1184.80 148.10 108694.35 16249.12 237467.77 00:29:51.674 00:29:51.674 20:17:39 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:51.933 20:17:39 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 08c7a8c4-3ec6-4868-9d60-e8092f0b87f9 00:29:52.867 20:17:40 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:52.868 20:17:40 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 54e802cc-c0d3-46ba-883f-1a782ea8a96b 00:29:53.435 20:17:40 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:53.435 20:17:41 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:53.435 20:17:41 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:53.435 20:17:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:53.435 20:17:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:53.435 20:17:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:53.435 20:17:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:53.435 20:17:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:53.435 20:17:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:53.435 rmmod nvme_tcp 00:29:53.435 rmmod nvme_fabrics 00:29:53.435 rmmod nvme_keyring 00:29:53.694 20:17:41 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:53.694 20:17:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:53.694 20:17:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:53.694 20:17:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3292459 ']' 00:29:53.694 20:17:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3292459 00:29:53.694 20:17:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 3292459 ']' 00:29:53.694 20:17:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 3292459 00:29:53.694 20:17:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:29:53.694 20:17:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:53.694 20:17:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3292459 00:29:53.694 20:17:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:53.694 20:17:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:53.694 20:17:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3292459' 00:29:53.694 killing process with pid 3292459 00:29:53.694 20:17:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 3292459 00:29:53.694 20:17:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 3292459 00:29:55.600 20:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:55.600 20:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:55.600 20:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:55.600 20:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:55.600 20:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:55.600 20:17:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.600 20:17:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:55.600 20:17:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.499 20:17:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:57.499 00:29:57.499 real 1m31.211s 00:29:57.499 user 5m37.763s 00:29:57.499 sys 0m15.204s 00:29:57.499 20:17:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:57.499 20:17:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:57.499 ************************************ 00:29:57.499 END TEST nvmf_perf 00:29:57.499 ************************************ 00:29:57.499 20:17:44 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:57.499 20:17:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:57.499 20:17:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:57.499 20:17:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:57.499 ************************************ 00:29:57.499 START TEST nvmf_fio_host 00:29:57.499 ************************************ 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:57.499 * Looking for test storage... 
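The nvmf_perf cleanup that closes above tears things down strictly in reverse dependency order: stop serving I/O first, then unwind the nested logical-volume stack from the top, and only then unload the kernel modules and the namespace. Condensed from the trace (rpc.py paths shortened):

rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1         # no more initiator I/O
rpc.py bdev_lvol_delete 08c7a8c4-3ec6-4868-9d60-e8092f0b87f9    # lvol on the nested store
rpc.py bdev_lvol_delete_lvstore -l lvs_n_0                      # nested store itself
rpc.py bdev_lvol_delete 54e802cc-c0d3-46ba-883f-1a782ea8a96b    # its backing lvol on lvs_0
rpc.py bdev_lvol_delete_lvstore -l lvs_0                        # base store on Nvme0n1
# nvmftestfini then kills nvmf_tgt, runs modprobe -r nvme-tcp / nvme-fabrics,
# and removes the test namespace, as logged above.

nvmf_fio_host then repeats the same device discovery and namespace bring-up from nvmf/common.sh, which is why the next stretch of trace mirrors the start of this section.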
00:29:57.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.499 20:17:44 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:57.500 20:17:44 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:59.430 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
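The device discovery re-running here is the same vendor/device-ID bucketing seen earlier: nvmf/common.sh sorts every known NIC PCI ID into an e810, x722, or mlx array and then, for an e810 target over tcp, keeps only the E810 entries. The shape of it, condensed (pci_bus_cache is the script's own map from "vendor:device" to PCI addresses):

intel=0x8086 mellanox=0x15b3
declare -a e810 x722 mlx

e810+=(${pci_bus_cache["$intel:0x1592"]})     # E810 family
e810+=(${pci_bus_cache["$intel:0x159b"]})     # E810 family (the two ports found here)
x722+=(${pci_bus_cache["$intel:0x37d2"]})     # X722 family
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})   # mlx5 family (one of several IDs)

pci_devs=("${e810[@]}")   # e810 target + tcp transport: only E810 ports remain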
00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:59.430 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:59.430 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:59.431 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:59.431 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
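Mapping each surviving PCI function to its kernel interface, as the "Found net devices under ..." lines record, is a pure sysfs lookup. A minimal sketch of the pattern (the up-state check mirrors the [[ up == up ]] tests in the trace; reading operstate is one way to obtain that value):

pci=0000:0a:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev on this function
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep e.g. cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"

for dev in "${pci_net_devs[@]}"; do
    [[ $(< "/sys/class/net/$dev/operstate") == up ]] && net_devs+=("$dev")
done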
00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:59.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:59.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:29:59.431 00:29:59.431 --- 10.0.0.2 ping statistics --- 00:29:59.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.431 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:29:59.431 20:17:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:59.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
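Everything target-side in this log runs through a single namespace wrapper, which is why each target command appears prefixed with "ip netns exec cvl_0_0_ns_spdk". The pattern, condensed from nvmf/common.sh as traced:

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")

# Any ad hoc command can be run on the target side of the link:
"${NVMF_TARGET_NS_CMD[@]}" ping -c 1 10.0.0.1

# And the target application itself inherits the prefix:
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")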
00:29:59.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:29:59.431 00:29:59.431 --- 10.0.0.1 ping statistics --- 00:29:59.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:59.431 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3304427 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3304427 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 3304427 ']' 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:59.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:59.431 20:17:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.694 [2024-07-13 20:17:47.078790] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:29:59.694 [2024-07-13 20:17:47.078897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.694 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.694 [2024-07-13 20:17:47.147289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:59.694 [2024-07-13 20:17:47.234106] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
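[Editor's note — not part of the captured log.] The nvmf_tcp_init sequence traced a few lines earlier splits the two ports of one NIC into an initiator side and a namespaced target side, so NVMe/TCP traffic crosses real hardware on a single host. A condensed, runnable recap built from exactly the ip/iptables commands in the trace (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are as logged; requires root):

    #!/usr/bin/env bash
    set -e
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1        # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                                          # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator

The nvmf_tgt process launched just above is wrapped in `ip netns exec cvl_0_0_ns_spdk`, so it serves 10.0.0.2 inside the namespace while fio connects from the default namespace over cvl_0_1.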
00:29:59.694 [2024-07-13 20:17:47.234169] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:59.694 [2024-07-13 20:17:47.234183] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:59.694 [2024-07-13 20:17:47.234194] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:59.694 [2024-07-13 20:17:47.234204] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:59.694 [2024-07-13 20:17:47.234268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:59.694 [2024-07-13 20:17:47.234331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:59.694 [2024-07-13 20:17:47.234394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:59.694 [2024-07-13 20:17:47.234396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.951 20:17:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:59.951 20:17:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:29:59.951 20:17:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:00.208 [2024-07-13 20:17:47.635594] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.208 20:17:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:00.208 20:17:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:00.208 20:17:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:00.208 20:17:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:00.465 Malloc1 00:30:00.465 20:17:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:00.723 20:17:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:00.981 20:17:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.238 [2024-07-13 20:17:48.727796] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.238 20:17:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:01.496 20:17:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:01.754 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:01.754 fio-3.35 00:30:01.754 Starting 1 thread 00:30:01.754 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.284 00:30:04.284 test: (groupid=0, jobs=1): err= 0: pid=3304879: Sat Jul 13 20:17:51 2024 00:30:04.284 read: IOPS=9271, BW=36.2MiB/s (38.0MB/s)(72.7MiB/2006msec) 00:30:04.284 slat (nsec): min=1912, max=111416, avg=2469.94, stdev=1417.99 00:30:04.284 clat (usec): min=3176, max=13069, avg=7652.82, stdev=570.73 00:30:04.284 lat (usec): min=3198, max=13071, avg=7655.29, stdev=570.64 00:30:04.284 clat percentiles (usec): 00:30:04.284 | 1.00th=[ 6390], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7177], 00:30:04.284 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:30:04.284 | 70.00th=[ 7898], 80.00th=[ 8094], 90.00th=[ 8291], 95.00th=[ 8455], 00:30:04.284 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[11338], 99.95th=[12256], 00:30:04.284 | 99.99th=[13042] 00:30:04.284 bw ( KiB/s): 
min=36376, max=37496, per=99.94%, avg=37064.00, stdev=481.24, samples=4 00:30:04.284 iops : min= 9094, max= 9374, avg=9266.00, stdev=120.31, samples=4 00:30:04.284 write: IOPS=9277, BW=36.2MiB/s (38.0MB/s)(72.7MiB/2006msec); 0 zone resets 00:30:04.284 slat (nsec): min=2049, max=89323, avg=2563.61, stdev=1126.91 00:30:04.284 clat (usec): min=1112, max=12147, avg=6115.13, stdev=511.24 00:30:04.284 lat (usec): min=1118, max=12150, avg=6117.69, stdev=511.21 00:30:04.284 clat percentiles (usec): 00:30:04.284 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:30:04.284 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6259], 00:30:04.284 | 70.00th=[ 6325], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:30:04.284 | 99.00th=[ 7177], 99.50th=[ 7308], 99.90th=[10159], 99.95th=[11207], 00:30:04.284 | 99.99th=[12125] 00:30:04.284 bw ( KiB/s): min=36672, max=37328, per=99.98%, avg=37100.00, stdev=293.47, samples=4 00:30:04.284 iops : min= 9168, max= 9332, avg=9275.00, stdev=73.37, samples=4 00:30:04.284 lat (msec) : 2=0.01%, 4=0.13%, 10=99.73%, 20=0.13% 00:30:04.284 cpu : usr=55.16%, sys=37.81%, ctx=33, majf=0, minf=31 00:30:04.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:04.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:04.284 issued rwts: total=18599,18610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:04.284 00:30:04.284 Run status group 0 (all jobs): 00:30:04.284 READ: bw=36.2MiB/s (38.0MB/s), 36.2MiB/s-36.2MiB/s (38.0MB/s-38.0MB/s), io=72.7MiB (76.2MB), run=2006-2006msec 00:30:04.284 WRITE: bw=36.2MiB/s (38.0MB/s), 36.2MiB/s-36.2MiB/s (38.0MB/s-38.0MB/s), io=72.7MiB (76.2MB), run=2006-2006msec 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:04.284 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:04.285 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:04.285 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:04.285 20:17:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:04.285 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:04.285 fio-3.35 00:30:04.285 Starting 1 thread 00:30:04.285 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.815 00:30:06.815 test: (groupid=0, jobs=1): err= 0: pid=3305233: Sat Jul 13 20:17:54 2024 00:30:06.815 read: IOPS=7787, BW=122MiB/s (128MB/s)(244MiB/2007msec) 00:30:06.815 slat (usec): min=2, max=119, avg= 3.78, stdev= 1.98 00:30:06.815 clat (usec): min=2434, max=22156, avg=9840.81, stdev=2492.54 00:30:06.815 lat (usec): min=2438, max=22161, avg=9844.59, stdev=2492.56 00:30:06.815 clat percentiles (usec): 00:30:06.815 | 1.00th=[ 4686], 5.00th=[ 5932], 10.00th=[ 6718], 20.00th=[ 7767], 00:30:06.815 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10159], 00:30:06.815 | 70.00th=[10814], 80.00th=[11731], 90.00th=[13435], 95.00th=[14353], 00:30:06.815 | 99.00th=[16057], 99.50th=[16581], 99.90th=[18220], 99.95th=[18744], 00:30:06.815 | 99.99th=[22152] 00:30:06.815 bw ( KiB/s): min=55904, max=74144, per=50.25%, avg=62608.00, stdev=7956.17, samples=4 00:30:06.815 iops : min= 3494, max= 4634, avg=3913.00, stdev=497.26, samples=4 00:30:06.815 write: IOPS=4462, BW=69.7MiB/s (73.1MB/s)(128MiB/1835msec); 0 zone resets 00:30:06.815 slat (usec): min=30, max=138, avg=35.01, stdev= 6.40 00:30:06.815 clat (usec): min=6059, max=24425, avg=11843.41, stdev=2367.29 00:30:06.815 lat (usec): min=6105, max=24457, avg=11878.42, stdev=2367.20 00:30:06.815 clat percentiles (usec): 00:30:06.815 | 1.00th=[ 7439], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:30:06.815 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:30:06.815 | 70.00th=[12780], 80.00th=[13698], 90.00th=[15139], 95.00th=[16581], 00:30:06.815 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19792], 99.95th=[23987], 00:30:06.815 | 99.99th=[24511] 00:30:06.815 bw ( KiB/s): min=57856, max=77728, per=91.28%, avg=65176.00, stdev=8663.49, samples=4 00:30:06.815 iops : min= 3616, max= 4858, avg=4073.50, stdev=541.47, samples=4 00:30:06.815 lat (msec) : 4=0.19%, 10=43.26%, 20=56.50%, 50=0.05% 00:30:06.815 cpu : usr=71.83%, sys=24.03%, ctx=23, majf=0, minf=51 
00:30:06.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:30:06.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:06.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:06.815 issued rwts: total=15630,8189,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:06.815 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:06.815 00:30:06.815 Run status group 0 (all jobs): 00:30:06.815 READ: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=244MiB (256MB), run=2007-2007msec 00:30:06.815 WRITE: bw=69.7MiB/s (73.1MB/s), 69.7MiB/s-69.7MiB/s (73.1MB/s-73.1MB/s), io=128MiB (134MB), run=1835-1835msec 00:30:06.815 20:17:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:06.815 20:17:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:06.815 20:17:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:06.815 20:17:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:06.815 20:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:06.815 20:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:30:06.815 20:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:06.815 20:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:06.815 20:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:30:06.815 20:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:30:06.815 20:17:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:30:06.815 20:17:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:10.100 Nvme0n1 00:30:10.100 20:17:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:13.379 20:18:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=24262650-21fd-4e7e-8e48-ab1efd08d258 00:30:13.379 20:18:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 24262650-21fd-4e7e-8e48-ab1efd08d258 00:30:13.379 20:18:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=24262650-21fd-4e7e-8e48-ab1efd08d258 00:30:13.379 20:18:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:13.379 20:18:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:13.379 20:18:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:13.379 20:18:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:13.379 20:18:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:13.379 { 00:30:13.379 "uuid": "24262650-21fd-4e7e-8e48-ab1efd08d258", 00:30:13.380 "name": "lvs_0", 00:30:13.380 "base_bdev": "Nvme0n1", 00:30:13.380 "total_data_clusters": 930, 00:30:13.380 "free_clusters": 930, 00:30:13.380 "block_size": 512, 
00:30:13.380 "cluster_size": 1073741824 00:30:13.380 } 00:30:13.380 ]' 00:30:13.380 20:18:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="24262650-21fd-4e7e-8e48-ab1efd08d258") .free_clusters' 00:30:13.380 20:18:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=930 00:30:13.380 20:18:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="24262650-21fd-4e7e-8e48-ab1efd08d258") .cluster_size' 00:30:13.380 20:18:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:13.380 20:18:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=952320 00:30:13.380 20:18:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 952320 00:30:13.380 952320 00:30:13.380 20:18:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:13.637 b7a54102-867a-44ee-bd0d-30d87ff96d50 00:30:13.637 20:18:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:13.894 20:18:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:14.152 20:18:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # 
[[ -n '' ]] 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:14.409 20:18:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:14.409 20:18:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:14.409 20:18:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:14.409 20:18:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:14.409 20:18:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:14.669 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:14.669 fio-3.35 00:30:14.669 Starting 1 thread 00:30:14.669 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.197 00:30:17.197 test: (groupid=0, jobs=1): err= 0: pid=3306623: Sat Jul 13 20:18:04 2024 00:30:17.197 read: IOPS=6151, BW=24.0MiB/s (25.2MB/s)(48.2MiB/2008msec) 00:30:17.197 slat (usec): min=2, max=150, avg= 2.68, stdev= 2.04 00:30:17.197 clat (usec): min=1007, max=171297, avg=11511.11, stdev=11519.45 00:30:17.197 lat (usec): min=1010, max=171335, avg=11513.80, stdev=11519.77 00:30:17.197 clat percentiles (msec): 00:30:17.197 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:17.197 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:30:17.197 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:30:17.197 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:17.197 | 99.99th=[ 171] 00:30:17.197 bw ( KiB/s): min=17368, max=27176, per=99.84%, avg=24566.00, stdev=4802.85, samples=4 00:30:17.197 iops : min= 4342, max= 6794, avg=6141.50, stdev=1200.71, samples=4 00:30:17.197 write: IOPS=6135, BW=24.0MiB/s (25.1MB/s)(48.1MiB/2008msec); 0 zone resets 00:30:17.197 slat (usec): min=2, max=126, avg= 2.81, stdev= 1.72 00:30:17.197 clat (usec): min=379, max=169357, avg=9226.89, stdev=10820.42 00:30:17.197 lat (usec): min=383, max=169365, avg=9229.70, stdev=10820.72 00:30:17.197 clat percentiles (msec): 00:30:17.197 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:30:17.197 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:17.197 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:30:17.197 | 99.00th=[ 11], 99.50th=[ 15], 99.90th=[ 169], 99.95th=[ 169], 00:30:17.197 | 99.99th=[ 169] 00:30:17.197 bw ( KiB/s): min=18408, max=26688, per=99.92%, avg=24522.00, stdev=4077.00, samples=4 00:30:17.197 iops : min= 4602, max= 6672, avg=6130.50, stdev=1019.25, samples=4 00:30:17.197 lat (usec) : 500=0.01%, 750=0.01% 00:30:17.197 lat (msec) : 2=0.03%, 4=0.13%, 10=59.02%, 20=40.28%, 250=0.52% 00:30:17.197 cpu : usr=57.20%, sys=37.57%, ctx=82, majf=0, minf=31 00:30:17.197 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:17.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:30:17.197 issued rwts: total=12352,12320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.197 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:17.197 00:30:17.197 Run status group 0 (all jobs): 00:30:17.197 READ: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=48.2MiB (50.6MB), run=2008-2008msec 00:30:17.197 WRITE: bw=24.0MiB/s (25.1MB/s), 24.0MiB/s-24.0MiB/s (25.1MB/s-25.1MB/s), io=48.1MiB (50.5MB), run=2008-2008msec 00:30:17.197 20:18:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:17.197 20:18:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:18.613 20:18:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=a10b3487-7110-4e09-b882-b9459f0c00ad 00:30:18.613 20:18:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb a10b3487-7110-4e09-b882-b9459f0c00ad 00:30:18.613 20:18:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=a10b3487-7110-4e09-b882-b9459f0c00ad 00:30:18.613 20:18:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:18.613 20:18:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:18.613 20:18:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:18.613 20:18:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:18.613 20:18:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:18.613 { 00:30:18.613 "uuid": "24262650-21fd-4e7e-8e48-ab1efd08d258", 00:30:18.613 "name": "lvs_0", 00:30:18.613 "base_bdev": "Nvme0n1", 00:30:18.613 "total_data_clusters": 930, 00:30:18.613 "free_clusters": 0, 00:30:18.613 "block_size": 512, 00:30:18.613 "cluster_size": 1073741824 00:30:18.613 }, 00:30:18.613 { 00:30:18.613 "uuid": "a10b3487-7110-4e09-b882-b9459f0c00ad", 00:30:18.613 "name": "lvs_n_0", 00:30:18.613 "base_bdev": "b7a54102-867a-44ee-bd0d-30d87ff96d50", 00:30:18.613 "total_data_clusters": 237847, 00:30:18.613 "free_clusters": 237847, 00:30:18.613 "block_size": 512, 00:30:18.613 "cluster_size": 4194304 00:30:18.613 } 00:30:18.613 ]' 00:30:18.613 20:18:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="a10b3487-7110-4e09-b882-b9459f0c00ad") .free_clusters' 00:30:18.613 20:18:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=237847 00:30:18.613 20:18:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="a10b3487-7110-4e09-b882-b9459f0c00ad") .cluster_size' 00:30:18.613 20:18:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:18.613 20:18:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=951388 00:30:18.613 20:18:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 951388 00:30:18.613 951388 00:30:18.613 20:18:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:19.549 dfdfd1c1-018a-4fa8-953f-def68eedd451 00:30:19.549 20:18:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:19.549 20:18:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:19.806 20:18:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:20.064 20:18:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:20.323 test: (g=0): 
rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:20.323 fio-3.35 00:30:20.323 Starting 1 thread 00:30:20.323 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.852 00:30:22.852 test: (groupid=0, jobs=1): err= 0: pid=3307640: Sat Jul 13 20:18:10 2024 00:30:22.852 read: IOPS=5791, BW=22.6MiB/s (23.7MB/s)(46.4MiB/2050msec) 00:30:22.852 slat (usec): min=2, max=174, avg= 2.70, stdev= 2.41 00:30:22.852 clat (usec): min=4552, max=62070, avg=12234.31, stdev=3403.73 00:30:22.852 lat (usec): min=4562, max=62073, avg=12237.01, stdev=3403.67 00:30:22.852 clat percentiles (usec): 00:30:22.852 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[10814], 20.00th=[11207], 00:30:22.852 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:30:22.852 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13566], 00:30:22.852 | 99.00th=[14353], 99.50th=[51119], 99.90th=[60556], 99.95th=[62129], 00:30:22.852 | 99.99th=[62129] 00:30:22.852 bw ( KiB/s): min=22720, max=24056, per=100.00%, avg=23614.00, stdev=624.63, samples=4 00:30:22.852 iops : min= 5680, max= 6014, avg=5903.50, stdev=156.16, samples=4 00:30:22.852 write: IOPS=5791, BW=22.6MiB/s (23.7MB/s)(46.4MiB/2050msec); 0 zone resets 00:30:22.852 slat (usec): min=2, max=154, avg= 2.84, stdev= 2.02 00:30:22.852 clat (usec): min=2381, max=58779, avg=9762.31, stdev=3339.85 00:30:22.852 lat (usec): min=2389, max=58782, avg=9765.15, stdev=3339.81 00:30:22.852 clat percentiles (usec): 00:30:22.852 | 1.00th=[ 7504], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 8848], 00:30:22.852 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:30:22.852 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:30:22.852 | 99.00th=[11600], 99.50th=[50070], 99.90th=[56361], 99.95th=[57410], 00:30:22.852 | 99.99th=[58983] 00:30:22.852 bw ( KiB/s): min=23504, max=23680, per=100.00%, avg=23622.00, stdev=80.37, samples=4 00:30:22.852 iops : min= 5876, max= 5920, avg=5905.50, stdev=20.09, samples=4 00:30:22.852 lat (msec) : 4=0.05%, 10=36.85%, 20=62.57%, 50=0.02%, 100=0.51% 00:30:22.852 cpu : usr=54.03%, sys=41.09%, ctx=84, majf=0, minf=31 00:30:22.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:22.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:22.852 issued rwts: total=11872,11873,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.852 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:22.852 00:30:22.852 Run status group 0 (all jobs): 00:30:22.852 READ: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=46.4MiB (48.6MB), run=2050-2050msec 00:30:22.852 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=46.4MiB (48.6MB), run=2050-2050msec 00:30:22.852 20:18:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:22.852 20:18:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:22.852 20:18:10 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:27.037 20:18:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:27.037 20:18:14 nvmf_tcp.nvmf_fio_host -- 
host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:30.326 20:18:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:30.326 20:18:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:32.229 rmmod nvme_tcp 00:30:32.229 rmmod nvme_fabrics 00:30:32.229 rmmod nvme_keyring 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3304427 ']' 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3304427 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 3304427 ']' 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 3304427 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3304427 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3304427' 00:30:32.229 killing process with pid 3304427 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 3304427 00:30:32.229 20:18:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 3304427 00:30:32.488 20:18:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:32.488 20:18:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:32.488 20:18:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:32.488 20:18:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:32.488 20:18:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:32.488 20:18:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.488 20:18:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
14> /dev/null' 00:30:32.488 20:18:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.394 20:18:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:34.394 00:30:34.394 real 0m37.163s 00:30:34.394 user 2m22.538s 00:30:34.394 sys 0m6.994s 00:30:34.394 20:18:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:34.394 20:18:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.394 ************************************ 00:30:34.394 END TEST nvmf_fio_host 00:30:34.394 ************************************ 00:30:34.653 20:18:22 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:34.653 20:18:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:34.653 20:18:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:34.653 20:18:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:34.653 ************************************ 00:30:34.653 START TEST nvmf_failover 00:30:34.653 ************************************ 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:34.653 * Looking for test storage... 00:30:34.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:34.653 20:18:22 
nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:34.653 20:18:22 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:34.653 20:18:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:36.554 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:36.554 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:36.554 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:36.554 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:36.554 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:36.554 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:36.554 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:36.554 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:36.554 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:36.554 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:36.555 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:36.555 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.555 
20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:36.555 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:36.555 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:36.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:36.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:30:36.555 00:30:36.555 --- 10.0.0.2 ping statistics --- 00:30:36.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.555 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:30:36.555 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:36.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:36.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:30:36.814 00:30:36.814 --- 10.0.0.1 ping statistics --- 00:30:36.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:36.814 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:30:36.814 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:36.814 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:36.814 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:36.814 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:36.814 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3311127 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3311127 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3311127 ']' 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
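To make the interface plumbing above easier to follow, here is the same nvmf_tcp_init sequence collected into one runnable sketch (interface names and 10.0.0.0/24 addresses exactly as in this run; run as root; the standalone script form is an editorial condensation, not the helper itself):

# Move one port of the NIC into a private namespace and address both ends.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1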
00:30:36.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:36.815 20:18:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:36.815 [2024-07-13 20:18:24.289910] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:30:36.815 [2024-07-13 20:18:24.289997] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:36.815 EAL: No free 2048 kB hugepages reported on node 1 00:30:36.815 [2024-07-13 20:18:24.355001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:36.815 [2024-07-13 20:18:24.443206] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:36.815 [2024-07-13 20:18:24.443268] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:36.815 [2024-07-13 20:18:24.443297] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:36.815 [2024-07-13 20:18:24.443308] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:36.815 [2024-07-13 20:18:24.443318] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:36.815 [2024-07-13 20:18:24.443403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:36.815 [2024-07-13 20:18:24.443472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:36.815 [2024-07-13 20:18:24.443474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.073 20:18:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:37.073 20:18:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:37.073 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:37.073 20:18:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:37.073 20:18:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:37.073 20:18:24 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:37.073 20:18:24 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:37.330 [2024-07-13 20:18:24.800519] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.330 20:18:24 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:37.590 Malloc0 00:30:37.590 20:18:25 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:37.890 20:18:25 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:38.147 20:18:25 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:30:38.407 [2024-07-13 20:18:25.810263] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.407 20:18:25 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:38.407 [2024-07-13 20:18:26.054907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:38.667 20:18:26 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:38.667 [2024-07-13 20:18:26.307750] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:38.928 20:18:26 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3311398 00:30:38.928 20:18:26 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:38.928 20:18:26 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:38.928 20:18:26 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3311398 /var/tmp/bdevperf.sock 00:30:38.928 20:18:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3311398 ']' 00:30:38.928 20:18:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:38.928 20:18:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:38.928 20:18:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:38.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
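The target configuration built up by the rpc.py calls traced above, collected into one sketch ($rpc stands for the full scripts/rpc.py path shown in the log; the port loop is an editorial condensation of the three separate add_listener calls):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data threshold
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                      # three listeners on one IP, to fail over between
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done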
00:30:38.928 20:18:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable
00:30:39.186 20:18:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:30:39.186 20:18:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:30:39.186 20:18:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0
00:30:39.186 20:18:26 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:39.444 NVMe0n1
00:30:39.444 20:18:26 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:39.702
00:30:39.702 20:18:27 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3311530
00:30:39.702 20:18:27 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:39.702 20:18:27 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:30:40.639 20:18:28 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:40.899 [2024-07-13 20:18:28.536796] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243a090 is same with the state(5) to be set
00:30:40.899 [the same message repeats for each queued state transition on tqpair 0x243a090 through 20:18:28.537070; identical entries condensed]
00:30:41.157 20:18:28 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:30:44.444 20:18:31 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:44.445
00:30:44.445 20:18:31 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:44.705 [2024-07-13 20:18:32.170938] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b610 is same with the state(5) to be set
00:30:44.705 [the same message repeats on tqpair 0x243b610 through 20:18:32.171442; identical entries condensed]
00:30:44.706 20:18:32 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:30:47.994 20:18:35 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:47.994 [2024-07-13 20:18:35.435606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:47.995 20:18:35 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:30:48.933 20:18:36 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:49.191 [2024-07-13 20:18:36.716205] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b980 is same with the state(5) to be set
00:30:49.191 [the same message repeats on tqpair 0x243b980 through 20:18:36.716615; identical entries condensed]
00:30:49.191 20:18:36 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3311530
00:30:55.757 0
00:30:55.757 20:18:42 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3311398
00:30:55.757 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3311398 ']'
00:30:55.757 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3311398
00:30:55.757 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:30:55.757 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:30:55.757 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3311398
00:30:55.757 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:30:55.757 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:30:55.757 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3311398'
00:30:55.757 killing process with pid 3311398
00:30:55.757 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3311398
00:30:55.757 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3311398
00:30:55.757 20:18:42 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:55.757 [2024-07-13 20:18:26.369309] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:30:55.757 [2024-07-13 20:18:26.369387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3311398 ]
00:30:55.757 EAL: No free 2048 kB hugepages reported on node 1
00:30:55.757 [2024-07-13 20:18:26.427788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:55.757 [2024-07-13 20:18:26.516152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:55.757 Running I/O for 15 seconds...
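With bdevperf connected to the subsystem twice (ports 4420 and 4421, both registered as paths of NVMe0), the host/failover.sh sequence traced above reduces to the following; the final wait is what produced the "0" above (a sketch; $rpc as in the earlier sketch, run_test_pid holding bdevperf.py's pid as in the trace):

# Exercise failover: remove the active listener, bring up a new path, repeat.
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3   # I/O on the 4420 path is aborted; bdev_nvme is expected to fail over to 4421
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3   # second failover, this time onto the 4422 path
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
wait "$run_test_pid"   # 0 here means bdevperf's I/O survived every path switch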
00:30:55.758 [2024-07-13 20:18:28.537787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.758 [2024-07-13 20:18:28.537829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.758 [2024-07-13 20:18:28.537879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.758 [2024-07-13 20:18:28.537898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.758 [the same command/completion pair repeats for every other I/O outstanding on the deleted queue: READs lba 76792-76896 and WRITEs lba 77096-77688; identical entries condensed]
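What follows in the captured try.txt is the expected fallout of the first remove_listener: the target tears down the submission queue for the 4420 connection, and every command still outstanding on it completes with ABORTED - SQ DELETION, which the failover logic is then expected to retry on the surviving path. A quick, hypothetical way to gauge the volume when reading such a capture (not part of the test itself):

# Count how many in-flight commands were failed back during the listener removals.
grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt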
00:30:55.760 [2024-07-13 20:18:28.540554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
[2024-07-13 20:18:28.540571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77696 len:8 PRP1 0x0 PRP2 0x0
[2024-07-13 20:18:28.540586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-13 20:18:28.540603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[the same abort / manual-completion triplet repeats for the queued WRITEs lba 77704-77784; identical entries condensed]
[2024-07-13 20:18:28.541133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.760 [2024-07-13 20:18:28.541144] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: [2024-07-13
20:18:28.541155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77792 len:8 PRP1 0x0 PRP2 0x0 00:30:55.760 [2024-07-13 20:18:28.541168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.760 [2024-07-13 20:18:28.541187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.760 [2024-07-13 20:18:28.541199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.760 [2024-07-13 20:18:28.541210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76904 len:8 PRP1 0x0 PRP2 0x0 00:30:55.760 [2024-07-13 20:18:28.541222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.760 [2024-07-13 20:18:28.541235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.760 [2024-07-13 20:18:28.541246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.760 [2024-07-13 20:18:28.541257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76912 len:8 PRP1 0x0 PRP2 0x0 00:30:55.760 [2024-07-13 20:18:28.541269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.760 [2024-07-13 20:18:28.541282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.760 [2024-07-13 20:18:28.541292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.760 [2024-07-13 20:18:28.541303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76920 len:8 PRP1 0x0 PRP2 0x0 00:30:55.760 [2024-07-13 20:18:28.541316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.760 [2024-07-13 20:18:28.541328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.760 [2024-07-13 20:18:28.541339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.760 [2024-07-13 20:18:28.541350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76928 len:8 PRP1 0x0 PRP2 0x0 00:30:55.760 [2024-07-13 20:18:28.541362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.760 [2024-07-13 20:18:28.541375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.760 [2024-07-13 20:18:28.541386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.760 [2024-07-13 20:18:28.541397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76936 len:8 PRP1 0x0 PRP2 0x0 00:30:55.760 [2024-07-13 20:18:28.541409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.760 [2024-07-13 20:18:28.541422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.760 [2024-07-13 20:18:28.541433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.760 [2024-07-13 20:18:28.541444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76944 len:8 PRP1 0x0 PRP2 0x0 00:30:55.760 [2024-07-13 20:18:28.541456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.760 [2024-07-13 20:18:28.541469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.760 [2024-07-13 20:18:28.541479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.760 [2024-07-13 20:18:28.541491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76952 len:8 PRP1 0x0 PRP2 0x0 00:30:55.760 [2024-07-13 20:18:28.541503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.760 [2024-07-13 20:18:28.541516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.760 [2024-07-13 20:18:28.541526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.760 [2024-07-13 20:18:28.541537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76960 len:8 PRP1 0x0 PRP2 0x0 00:30:55.760 [2024-07-13 20:18:28.541553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.760 [2024-07-13 20:18:28.541566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.760 [2024-07-13 20:18:28.541577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.760 [2024-07-13 20:18:28.541588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76968 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 [2024-07-13 20:18:28.541600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.541613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.761 [2024-07-13 20:18:28.541624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.761 [2024-07-13 20:18:28.541635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76976 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 [2024-07-13 20:18:28.541647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.541667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.761 [2024-07-13 20:18:28.541678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.761 [2024-07-13 20:18:28.541689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76984 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 [2024-07-13 20:18:28.541702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.541715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.761 [2024-07-13 20:18:28.541726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.761 [2024-07-13 20:18:28.541737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:76992 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 [2024-07-13 20:18:28.541749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.541762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.761 [2024-07-13 20:18:28.541773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.761 [2024-07-13 20:18:28.541784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77000 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 [2024-07-13 20:18:28.541796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.541809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.761 [2024-07-13 20:18:28.541819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.761 [2024-07-13 20:18:28.541830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77008 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 [2024-07-13 20:18:28.541843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.541855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.761 [2024-07-13 20:18:28.541871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.761 [2024-07-13 20:18:28.541883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77016 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 [2024-07-13 20:18:28.541896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.541909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.761 [2024-07-13 20:18:28.541919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.761 [2024-07-13 20:18:28.541934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77024 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 [2024-07-13 20:18:28.541947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.541960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.761 [2024-07-13 20:18:28.541971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.761 [2024-07-13 20:18:28.541982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77032 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 [2024-07-13 20:18:28.541994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.542006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.761 [2024-07-13 20:18:28.542017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.761 [2024-07-13 20:18:28.542028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77040 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 
[2024-07-13 20:18:28.542040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.542058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.761 [2024-07-13 20:18:28.542069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.761 [2024-07-13 20:18:28.542080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77048 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 [2024-07-13 20:18:28.542093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.542106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.761 [2024-07-13 20:18:28.542116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.761 [2024-07-13 20:18:28.542128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77056 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 [2024-07-13 20:18:28.542147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.542160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.761 [2024-07-13 20:18:28.542171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.761 [2024-07-13 20:18:28.542182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77064 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 [2024-07-13 20:18:28.542195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.542208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.761 [2024-07-13 20:18:28.542219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.761 [2024-07-13 20:18:28.542230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77072 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 [2024-07-13 20:18:28.542243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.542255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.761 [2024-07-13 20:18:28.542266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.761 [2024-07-13 20:18:28.542277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77080 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 [2024-07-13 20:18:28.542290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.542303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:55.761 [2024-07-13 20:18:28.542316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:55.761 [2024-07-13 20:18:28.542328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77088 len:8 PRP1 0x0 PRP2 0x0 00:30:55.761 [2024-07-13 20:18:28.542340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.542403] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1049ef0 was disconnected and freed. reset controller. 00:30:55.761 [2024-07-13 20:18:28.542421] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:55.761 [2024-07-13 20:18:28.542454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.761 [2024-07-13 20:18:28.542472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.542486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.761 [2024-07-13 20:18:28.542500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.542514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.761 [2024-07-13 20:18:28.542526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.542546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:55.761 [2024-07-13 20:18:28.542560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:28.542573] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:55.761 [2024-07-13 20:18:28.542620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102a740 (9): Bad file descriptor 00:30:55.761 [2024-07-13 20:18:28.545849] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:55.761 [2024-07-13 20:18:28.705344] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:30:55.761 [2024-07-13 20:18:32.172666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.761 [2024-07-13 20:18:32.172711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:32.172738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.761 [2024-07-13 20:18:32.172753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:32.172770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.761 [2024-07-13 20:18:32.172784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:32.172799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.761 [2024-07-13 20:18:32.172812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:32.172828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.761 [2024-07-13 20:18:32.172841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:32.172861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.761 [2024-07-13 20:18:32.172901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:32.172923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.761 [2024-07-13 20:18:32.172938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.761 [2024-07-13 20:18:32.172953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.172967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.172982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.172995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 
20:18:32.173038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:111024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:111040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:111048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:111064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:111072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.762 [2024-07-13 20:18:32.173811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.762 [2024-07-13 20:18:32.173839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.762 [2024-07-13 20:18:32.173887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.762 [2024-07-13 20:18:32.173920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173934] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.762 [2024-07-13 20:18:32.173947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.762 [2024-07-13 20:18:32.173976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.173991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.762 [2024-07-13 20:18:32.174008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.762 [2024-07-13 20:18:32.174024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.763 [2024-07-13 20:18:32.174067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:111128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.763 [2024-07-13 20:18:32.174095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.763 [2024-07-13 20:18:32.174123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 
20:18:32.174817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.174981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.174996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.175011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.175026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.175039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.175054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.175068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.175083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.175096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.763 [2024-07-13 20:18:32.175111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.763 [2024-07-13 20:18:32.175129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.763 [2024-07-13 20:18:32.175144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:55.763 [2024-07-13 20:18:32.175166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 13 further in-flight WRITEs (lba:111480 through lba:111576, len:8 each) completed with the same ABORTED - SQ DELETION (00/08) status on qid:1 ...]
00:30:55.764 [2024-07-13 20:18:32.175596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.764 [2024-07-13 20:18:32.175613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111584 len:8 PRP1 0x0 PRP2 0x0
00:30:55.764 [2024-07-13 20:18:32.175627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 32 further queued WRITEs (lba:111592 through lba:111840, len:8 each), each preceded by nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o, completed manually with ABORTED - SQ DELETION ...]
00:30:55.765 [2024-07-13 20:18:32.177241] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x104c130 was disconnected and freed. reset controller.
00:30:55.765 [2024-07-13 20:18:32.177261] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:30:55.765 [2024-07-13 20:18:32.177293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:55.765 [2024-07-13 20:18:32.177317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.765 [2024-07-13 20:18:32.177332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:55.765 [2024-07-13 20:18:32.177345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.765 [2024-07-13 20:18:32.177358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:55.765 [2024-07-13 20:18:32.177382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.765 [2024-07-13 20:18:32.177395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:55.765 [2024-07-13 20:18:32.177407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.765 [2024-07-13 20:18:32.177420] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:55.765 [2024-07-13 20:18:32.177474] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102a740 (9): Bad file descriptor
00:30:55.765 [2024-07-13 20:18:32.180730] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:55.765 [2024-07-13 20:18:32.214828] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
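The records above are the substance of this burst: bdev_nvme abandons the 10.0.0.2:4421 path, fails the controller (the TCP socket is already dead, hence the errno 9 / Bad file descriptor flush error), switches the transport ID to 10.0.0.2:4422, and reconnects. What follows is a minimal sketch, not the autotest code itself, of an analogous connect-then-reset flow against SPDK's public NVMe driver API; the target address and subsystem NQN are taken from the log, while the program name, lack of a retry loop, and error handling are illustrative assumptions.

/*
 * failover_sketch.c - hedged illustration only, not the test under run.
 * Connects to the NVMe-oF/TCP subsystem named in the log and resets the
 * controller, which is the public-API analogue of the internal
 * disconnect/reset path the bdev_nvme records above trace.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "failover_sketch";    /* illustrative name */
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Primary path from the log; a real failover would retry with the
	 * secondary trid 10.0.0.2:4422 when this one goes away. */
	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "%s", "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "%s", "4421");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "%s",
		 "nqn.2016-06.io.spdk:cnode1");

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s:%s failed\n",
			trid.traddr, trid.trsvcid);
		return 1;
	}

	/* On a transport failure (like the flush error above), tear the
	 * qpairs down and bring the controller back up. */
	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		fprintf(stderr, "controller reset failed\n");
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}

The log then continues with a second teardown at 20:18:36, when the 10.0.0.2:4422 connection is dropped in turn.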
00:30:55.765 [2024-07-13 20:18:36.716946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:55.765 [2024-07-13 20:18:36.716987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... further in-flight READs (lba:29880 through lba:30120, SGL TRANSPORT DATA BLOCK) and WRITEs (lba:30128 through lba:30880, SGL DATA BLOCK OFFSET), len:8 each, all completed with ABORTED - SQ DELETION (00/08) on qid:1 ...]
00:30:55.768 [2024-07-13 20:18:36.720858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:55.768 [2024-07-13 20:18:36.720881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:55.768 [2024-07-13 20:18:36.720894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:30888 len:8 PRP1 0x0 PRP2 0x0
00:30:55.768 [2024-07-13 20:18:36.720908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.768 [2024-07-13 20:18:36.720967] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x104dff0 was disconnected and freed. reset controller.
00:30:55.768 [2024-07-13 20:18:36.720985] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:30:55.768 [2024-07-13 20:18:36.721016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:55.768 [2024-07-13 20:18:36.721034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.768 [2024-07-13 20:18:36.721049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:55.768 [2024-07-13 20:18:36.721063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.768 [2024-07-13 20:18:36.721076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:55.768 [2024-07-13 20:18:36.721090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.768 [2024-07-13 20:18:36.721104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:55.768 [2024-07-13 20:18:36.721117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.768 [2024-07-13 20:18:36.721131] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:55.768 [2024-07-13 20:18:36.721179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x102a740 (9): Bad file descriptor
00:30:55.768 [2024-07-13 20:18:36.724403] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:55.768 [2024-07-13 20:18:36.887726] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
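The "Resetting controller successful" notices above are what failover.sh keys on: it counts them in the captured bdevperf output and fails unless exactly three resets completed, one per failover hop. A minimal sketch of that check ($log stands in for the capture file, which the xtrace below does not name):

  # count completed controller resets in the captured bdevperf output
  count=$(grep -c 'Resetting controller successful' "$log")
  (( count == 3 )) || { echo "expected 3 successful resets, saw $count" >&2; exit 1; }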
00:30:55.768
00:30:55.768 Latency(us)
00:30:55.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:55.769 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:55.769 Verification LBA range: start 0x0 length 0x4000
00:30:55.769 NVMe0n1 : 15.01 8487.67 33.15 931.41 0.00 13562.33 843.47 17961.72
00:30:55.769 ===================================================================================================================
00:30:55.769 Total : 8487.67 33.15 931.41 0.00 13562.33 843.47 17961.72
00:30:55.769 Received shutdown signal, test time was about 15.000000 seconds
00:30:55.769
00:30:55.769 Latency(us)
00:30:55.769 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:55.769 ===================================================================================================================
00:30:55.769 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:55.769 20:18:42 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:55.769 20:18:42 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:30:55.769 20:18:42 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:30:55.769 20:18:42 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3313381
00:30:55.769 20:18:42 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:30:55.769 20:18:42 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3313381 /var/tmp/bdevperf.sock
00:30:55.769 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 3313381 ']'
00:30:55.769 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:55.769 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:30:55.769 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
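The bdevperf instance just launched is idle on purpose: -z makes it wait for a perform_tests RPC instead of running I/O immediately, so the script can rewire paths first. A condensed sketch of that launch pattern, with the autotest waitforlisten helper open-coded as a simple poll (rpc_get_methods is used here only as a cheap liveness probe):

  SOCK=/var/tmp/bdevperf.sock
  ./build/examples/bdevperf -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # wait until the app answers on its private RPC socket
  until ./scripts/rpc.py -s "$SOCK" rpc_get_methods &>/dev/null; do sleep 0.1; done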
00:30:55.769 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable
00:30:55.769 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:30:55.769 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:30:55.769 20:18:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0
00:30:55.769 20:18:42 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-07-13 20:18:43.209370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:30:55.769 20:18:43 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:56.027 [2024-07-13 20:18:43.466161] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:30:56.027 20:18:43 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:56.284 NVMe0n1
00:30:56.543 20:18:43 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:56.847
00:30:56.847 20:18:44 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:57.105
00:30:57.105 20:18:44 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:57.105 20:18:44 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:30:57.418 20:18:44 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:57.675 20:18:45 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:31:00.967 20:18:48 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:00.967 20:18:48 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:31:00.967 20:18:48 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3314050
00:31:00.967 20:18:48 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:00.967 20:18:48 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3314050
00:31:01.901 0
00:31:01.901 20:18:49 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-07-13 20:18:42.742943] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:31:01.901 [2024-07-13 20:18:42.743046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3313381 ] 00:31:01.901 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.901 [2024-07-13 20:18:42.802670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.901 [2024-07-13 20:18:42.885586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.901 [2024-07-13 20:18:45.049644] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:01.901 [2024-07-13 20:18:45.049736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.901 [2024-07-13 20:18:45.049759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.901 [2024-07-13 20:18:45.049776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.901 [2024-07-13 20:18:45.049790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.901 [2024-07-13 20:18:45.049804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.901 [2024-07-13 20:18:45.049817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.901 [2024-07-13 20:18:45.049832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.901 [2024-07-13 20:18:45.049845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.901 [2024-07-13 20:18:45.049858] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:01.901 [2024-07-13 20:18:45.049912] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:01.901 [2024-07-13 20:18:45.049943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1530740 (9): Bad file descriptor 00:31:01.901 [2024-07-13 20:18:45.054947] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:01.901 Running I/O for 1 seconds... 
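Stripped of the xtrace prefixes, the failover pass replayed above reduces to a short RPC sequence; a sketch, assuming the target listens on the default /var/tmp/spdk.sock and already exposes nqn.2016-06.io.spdk:cnode1 on port 4420:

  rpc=./scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # grow two alternate portals on the subsystem
  $rpc nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
  # register all three paths with bdevperf's bdev_nvme; the first attach creates NVMe0n1
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
  done
  # drop the active path; bdev_nvme fails over to 10.0.0.2:4421 as logged above
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  sleep 3
  # release the parked verify workload
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests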
00:31:01.901
00:31:01.901 Latency(us)
00:31:01.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:01.901 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:01.901 Verification LBA range: start 0x0 length 0x4000
00:31:01.901 NVMe0n1 : 1.01 8583.92 33.53 0.00 0.00 14853.46 3276.80 12136.30
00:31:01.901 ===================================================================================================================
00:31:01.901 Total : 8583.92 33.53 0.00 0.00 14853.46 3276.80 12136.30
00:31:01.901 20:18:49 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:01.901 20:18:49 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:31:02.159 20:18:49 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:02.416 20:18:49 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:02.416 20:18:49 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:31:02.674 20:18:50 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:02.932 20:18:50 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:31:06.215 20:18:53 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:06.215 20:18:53 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:31:06.215 20:18:53 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3313381
00:31:06.215 20:18:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3313381 ']'
00:31:06.215 20:18:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3313381
00:31:06.215 20:18:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:31:06.215 20:18:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:31:06.215 20:18:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3313381
00:31:06.215 20:18:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:31:06.215 20:18:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:31:06.215 20:18:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3313381'
killing process with pid 3313381
00:31:06.215 20:18:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3313381
00:31:06.215 20:18:53 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3313381
00:31:06.474 20:18:54 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:31:06.474 20:18:54 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
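The tail of the test then walks the controller off its two remaining portals, checking after each detach that NVMe0 is still registered (i.e. that multipath kept it alive), before deleting the subsystem on the target side. Condensed into a sketch:

  rpc="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
  NQN=nqn.2016-06.io.spdk:cnode1
  for port in 4422 4421; do
      $rpc bdev_nvme_get_controllers | grep -q NVMe0   # must survive each detach
      $rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
  done
  sleep 3
  $rpc bdev_nvme_get_controllers | grep -q NVMe0
  kill $bdevperf_pid && wait $bdevperf_pid
  ./scripts/rpc.py nvmf_delete_subsystem $NQN          # default socket, i.e. the target app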
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:31:06.733 rmmod nvme_tcp
00:31:06.733 rmmod nvme_fabrics
00:31:06.733 rmmod nvme_keyring
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3311127 ']'
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3311127
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 3311127 ']'
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 3311127
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3311127
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3311127'
killing process with pid 3311127
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 3311127
00:31:06.733 20:18:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 3311127
00:31:06.992 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:31:06.992 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:31:06.992 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:31:06.992 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:31:06.992 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
00:31:06.992 20:18:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:06.992 20:18:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:31:06.992 20:18:54 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:09.525 20:18:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:31:09.525
00:31:09.525 real 0m34.571s
00:31:09.525 user 1m59.390s
00:31:09.525 sys 0m6.693s
00:31:09.525 20:18:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable
00:31:09.525 20:18:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
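nvmftestfini, traced above, amounts to the following host-side cleanup. A sketch under two assumptions: $nvmfpid is the target app started earlier, and _remove_spdk_ns (whose body the trace hides behind an fd redirect) boils down to deleting the test namespace:

  modprobe -v -r nvme-tcp            # the rmmod lines above show nvme_fabrics/nvme_keyring unloading too
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"
  ip netns delete cvl_0_0_ns_spdk    # assumed content of _remove_spdk_ns
  ip -4 addr flush cvl_0_1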
00:31:09.525 ************************************ 00:31:09.525 END TEST nvmf_failover 00:31:09.525 ************************************ 00:31:09.525 20:18:56 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:09.525 20:18:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:09.525 20:18:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:09.525 20:18:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:09.525 ************************************ 00:31:09.525 START TEST nvmf_host_discovery 00:31:09.525 ************************************ 00:31:09.525 20:18:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:09.525 * Looking for test storage... 00:31:09.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:09.525 20:18:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:09.525 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:09.525 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:09.525 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:09.525 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:09.525 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:09.525 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:09.525 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:09.526 20:18:56 
nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # 
DISCOVERY_PORT=8009 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:09.526 20:18:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:11.457 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:11.457 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:11.457 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:11.457 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:11.457 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:31:11.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:11.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms
00:31:11.458
00:31:11.458 --- 10.0.0.2 ping statistics ---
00:31:11.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:11.458 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:11.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:11.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms
00:31:11.458
00:31:11.458 --- 10.0.0.1 ping statistics ---
00:31:11.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:11.458 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3316644
00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3316644 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3316644 ']' 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:11.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:11.458 20:18:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.458 [2024-07-13 20:18:58.847533] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:11.458 [2024-07-13 20:18:58.847610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:11.458 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.458 [2024-07-13 20:18:58.910470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.458 [2024-07-13 20:18:58.993846] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:11.458 [2024-07-13 20:18:58.993904] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:11.458 [2024-07-13 20:18:58.993935] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:11.458 [2024-07-13 20:18:58.993946] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:11.458 [2024-07-13 20:18:58.993957] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
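Everything from the PCI scan down to the nvmf_tgt launch above is the stock autotest bring-up: the two ice ports become a point-to-point link, with cvl_0_0 moved into a private namespace as the target side. Collected into one runnable sketch (interface names are the ones this rig detected; another machine's will differ):

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                           # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec $NS ping -c 1 10.0.0.1   # sanity, both directions
  modprobe nvme-tcp
  ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target pinned to core 1
  nvmfpid=$!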
00:31:11.458 [2024-07-13 20:18:58.993991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.458 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:11.458 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:11.458 20:18:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:11.458 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:11.458 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.717 20:18:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:11.717 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:11.717 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.717 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.717 [2024-07-13 20:18:59.140650] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:11.717 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.717 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:11.717 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.717 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.717 [2024-07-13 20:18:59.148817] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:11.717 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.717 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:11.717 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.717 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.717 null0 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.718 null1 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3316668 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3316668 /tmp/host.sock 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 3316668 ']' 00:31:11.718 20:18:59 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:11.718 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:11.718 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.718 [2024-07-13 20:18:59.224753] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:31:11.718 [2024-07-13 20:18:59.224825] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3316668 ] 00:31:11.718 EAL: No free 2048 kB hugepages reported on node 1 00:31:11.718 [2024-07-13 20:18:59.287972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.976 [2024-07-13 20:18:59.376923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:11.976 
20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:11.976 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:12.234 20:18:59 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.234 [2024-07-13 20:18:59.778534] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.234 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:31:12.495 20:18:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:13.064 [2024-07-13 20:19:00.508275] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:13.064 [2024-07-13 20:19:00.508310] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:13.064 [2024-07-13 20:19:00.508333] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:13.064 [2024-07-13 20:19:00.595612] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:13.064 [2024-07-13 20:19:00.698735] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:13.064 [2024-07-13 20:19:00.698763] 
bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:13.322 20:19:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:13.322 20:19:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:13.322 20:19:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:13.322 20:19:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:13.322 20:19:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:13.322 20:19:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.322 20:19:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.322 20:19:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:13.322 20:19:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:13.322 20:19:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 
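Everything above is the host side of the discovery handshake: bdev_nvme_start_discovery connects to the discovery service on 10.0.0.2:8009, and the waitforcondition loops poll the host RPC socket until the attached controller and its namespace bdev show up. The same queries can be replayed by hand against a running target; a minimal sketch, assuming an SPDK checkout so that scripts/rpc.py is available (rpc_cmd in this trace is the suite's wrapper around it):

  # controllers attached by the discovery service; the test expects "nvme0"
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'

  # bdevs created for each discovered namespace; the test expects "nvme0n1"
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'

  # ports (trsvcid) of the paths behind controller nvme0
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'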
00:31:13.580 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:13.581 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.838 [2024-07-13 20:19:01.355353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:13.838 [2024-07-13 20:19:01.355801] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:13.838 [2024-07-13 20:19:01.355840] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.838 [2024-07-13 20:19:01.483619] 
bdev_nvme.c:6908:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:13.838 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:14.098 20:19:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:14.098 [2024-07-13 20:19:01.542254] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:14.098 [2024-07-13 20:19:01.542277] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:14.098 [2024-07-13 20:19:01.542287] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.040 [2024-07-13 20:19:02.591643] bdev_nvme.c:6966:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:15.040 [2024-07-13 20:19:02.591687] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:15.040 [2024-07-13 20:19:02.597962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.040 [2024-07-13 20:19:02.597995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.040 [2024-07-13 20:19:02.598027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.040 [2024-07-13 20:19:02.598041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.040 [2024-07-13 20:19:02.598055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.040 [2024-07-13 20:19:02.598071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.040 [2024-07-13 20:19:02.598096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:15.040 [2024-07-13 20:19:02.598138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:15.040 [2024-07-13 20:19:02.598178] nvme_tcp.c: 
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2252da0 is same with the state(5) to be set 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:15.040 [2024-07-13 20:19:02.607959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2252da0 (9): Bad file descriptor 00:31:15.040 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.040 [2024-07-13 20:19:02.618001] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.040 [2024-07-13 20:19:02.618300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.040 [2024-07-13 20:19:02.618332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2252da0 with addr=10.0.0.2, port=4420 00:31:15.040 [2024-07-13 20:19:02.618351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2252da0 is same with the state(5) to be set 00:31:15.040 [2024-07-13 20:19:02.618376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2252da0 (9): Bad file descriptor 00:31:15.040 [2024-07-13 20:19:02.618400] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:15.040 [2024-07-13 20:19:02.618416] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:15.040 [2024-07-13 20:19:02.618433] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:15.040 [2024-07-13 20:19:02.618456] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.040 [2024-07-13 20:19:02.628095] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.041 [2024-07-13 20:19:02.628374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.041 [2024-07-13 20:19:02.628405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2252da0 with addr=10.0.0.2, port=4420 00:31:15.041 [2024-07-13 20:19:02.628423] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2252da0 is same with the state(5) to be set 00:31:15.041 [2024-07-13 20:19:02.628446] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2252da0 (9): Bad file descriptor 00:31:15.041 [2024-07-13 20:19:02.628484] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:15.041 [2024-07-13 20:19:02.628504] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:15.041 [2024-07-13 20:19:02.628519] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
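The errors above are the expected fallout of removing the 4420 listener (host/discovery.sh@127): the target no longer accepts connections on 10.0.0.2:4420, so every host-side reconnect attempt fails with errno 111 (ECONNREFUSED), and the controller reset keeps failing until a fresh discovery log page drops the stale path. Replayed by hand, the sequence would look roughly like this; a sketch, with addresses and socket paths taken from this run (target-side RPCs go to nvmf_tgt's default socket, host-side ones to /tmp/host.sock):

  # target side: stop listening on the first port
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # host side: once the discovery poller catches up, only 4421 should remain
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'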
00:31:15.041 [2024-07-13 20:19:02.628539] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.041 [2024-07-13 20:19:02.638164] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.041 [2024-07-13 20:19:02.638404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.041 [2024-07-13 20:19:02.638434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2252da0 with addr=10.0.0.2, port=4420 00:31:15.041 [2024-07-13 20:19:02.638453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2252da0 is same with the state(5) to be set 00:31:15.041 [2024-07-13 20:19:02.638483] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2252da0 (9): Bad file descriptor 00:31:15.041 [2024-07-13 20:19:02.638507] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:15.041 [2024-07-13 20:19:02.638523] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:15.041 [2024-07-13 20:19:02.638537] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:15.041 [2024-07-13 20:19:02.638557] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:15.041 [2024-07-13 20:19:02.648251] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.041 [2024-07-13 20:19:02.648499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.041 [2024-07-13 20:19:02.648527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2252da0 with addr=10.0.0.2, port=4420 00:31:15.041 [2024-07-13 20:19:02.648543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2252da0 is same with the state(5) to be set 00:31:15.041 [2024-07-13 20:19:02.648565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2252da0 (9): Bad file descriptor 00:31:15.041 [2024-07-13 20:19:02.649393] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:15.041 [2024-07-13 20:19:02.649415] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:15.041 [2024-07-13 20:19:02.649444] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:15.041 [2024-07-13 20:19:02.649475] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.041 [2024-07-13 20:19:02.658335] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.041 [2024-07-13 20:19:02.658585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.041 [2024-07-13 20:19:02.658613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2252da0 with addr=10.0.0.2, port=4420 00:31:15.041 [2024-07-13 20:19:02.658630] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2252da0 is same with the state(5) to be set 00:31:15.041 [2024-07-13 20:19:02.658652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2252da0 (9): Bad file descriptor 00:31:15.041 [2024-07-13 20:19:02.658684] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:15.041 [2024-07-13 20:19:02.658708] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:15.041 [2024-07-13 20:19:02.658722] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:15.041 [2024-07-13 20:19:02.658741] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.041 [2024-07-13 20:19:02.668424] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.041 [2024-07-13 20:19:02.668644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.041 [2024-07-13 20:19:02.668672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2252da0 with addr=10.0.0.2, port=4420 00:31:15.041 [2024-07-13 20:19:02.668690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2252da0 is same with the state(5) to be set 00:31:15.041 [2024-07-13 20:19:02.668711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2252da0 (9): Bad file descriptor 00:31:15.041 [2024-07-13 20:19:02.668757] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:15.041 [2024-07-13 20:19:02.668776] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:15.041 [2024-07-13 20:19:02.668791] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:15.041 [2024-07-13 20:19:02.668810] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
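Every check in this test leans on the same retry helper whose expansion keeps repeating in the xtrace above (local cond=..., local max=10, (( max-- )), eval, sleep 1, return 0). Reconstructed from that trace as a sketch; the real helper lives in common/autotest_common.sh, and the timeout path is assumed here because no check actually times out in this run:

  waitforcondition() {
      local cond=$1   # a bash expression, re-evaluated until it holds
      local max=10    # bound the polling at roughly ten attempts
      while (( max-- )); do
          if eval "$cond"; then
              return 0     # condition met
          fi
          sleep 1          # matches autotest_common.sh@916 in the trace
      done
      return 1             # assumed timeout behavior, not exercised above
  }

  # usage, as in host/discovery.sh@130 above:
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'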
00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.041 [2024-07-13 20:19:02.678506] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:15.041 [2024-07-13 20:19:02.678748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.041 [2024-07-13 20:19:02.678775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2252da0 with addr=10.0.0.2, port=4420 00:31:15.041 [2024-07-13 20:19:02.678791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2252da0 is same with the state(5) to be set 00:31:15.041 [2024-07-13 20:19:02.678812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2252da0 (9): Bad file descriptor 00:31:15.041 [2024-07-13 20:19:02.678846] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:15.041 [2024-07-13 20:19:02.678863] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:15.041 [2024-07-13 20:19:02.678896] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:15.041 [2024-07-13 20:19:02.678916] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.041 [2024-07-13 20:19:02.679859] bdev_nvme.c:6771:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:15.041 [2024-07-13 20:19:02.679894] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:15.041 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # 
jq '. | length' 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.334 20:19:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.717 [2024-07-13 20:19:03.968830] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:16.717 [2024-07-13 20:19:03.968889] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:16.717 [2024-07-13 20:19:03.968913] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:16.717 [2024-07-13 20:19:04.095339] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:16.717 [2024-07-13 20:19:04.161646] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:16.717 [2024-07-13 20:19:04.161700] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:16.717 request: 00:31:16.717 { 00:31:16.717 "name": "nvme", 00:31:16.717 "trtype": "tcp", 00:31:16.717 "traddr": "10.0.0.2", 00:31:16.717 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:16.717 "adrfam": "ipv4", 00:31:16.717 "trsvcid": "8009", 00:31:16.717 "wait_for_attach": true, 00:31:16.717 "method": "bdev_nvme_start_discovery", 00:31:16.717 "req_id": 1 00:31:16.717 } 00:31:16.717 Got JSON-RPC error response 00:31:16.717 response: 00:31:16.717 { 00:31:16.717 "code": -17, 00:31:16.717 "message": "File exists" 00:31:16.717 } 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.717 request: 00:31:16.717 { 00:31:16.717 "name": "nvme_second", 00:31:16.717 "trtype": "tcp", 00:31:16.717 "traddr": "10.0.0.2", 00:31:16.717 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:16.717 "adrfam": "ipv4", 00:31:16.717 "trsvcid": "8009", 00:31:16.717 "wait_for_attach": true, 00:31:16.717 "method": "bdev_nvme_start_discovery", 00:31:16.717 "req_id": 1 00:31:16.717 } 00:31:16.717 Got JSON-RPC error response 00:31:16.717 response: 00:31:16.717 { 00:31:16.717 "code": -17, 00:31:16.717 "message": "File exists" 00:31:16.717 } 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.717 20:19:04 
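The duplicate-registration assertions above rely on two small helpers from autotest_common.sh: waitforcondition, which re-evaluates a shell condition up to max=10 times, and the NOT/valid_exec_arg wrapper, which inverts a command's exit status so the repeated bdev_nvme_start_discovery call is required to fail with -17 ("File exists"). A minimal bash sketch of both, under the assumption that the real helpers only add xtrace bookkeeping and exit-code normalization on top of this:

    # Hedged reconstruction; the real helpers live in test/common/autotest_common.sh.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # condition met
            sleep 1                    # retry pacing is assumed, not shown in the trace
        done
        return 1
    }

    NOT() {
        ! "$@"    # succeed only when the wrapped command fails
    }

    # e.g. wait until the host sees no bdevs, then assert that re-registering
    # the same discovery service is rejected by the JSON-RPC layer:
    waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w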
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.717 20:19:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.093 [2024-07-13 20:19:05.377108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.093 [2024-07-13 20:19:05.377148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2285320 with addr=10.0.0.2, port=8010 00:31:18.093 [2024-07-13 20:19:05.377184] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:18.093 [2024-07-13 20:19:05.377197] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:18.093 [2024-07-13 20:19:05.377208] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:19.030 [2024-07-13 20:19:06.379640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.030 [2024-07-13 20:19:06.379688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2285320 with addr=10.0.0.2, port=8010 00:31:19.030 [2024-07-13 20:19:06.379709] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:19.030 [2024-07-13 20:19:06.379721] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:19.030 [2024-07-13 20:19:06.379733] bdev_nvme.c:7046:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:19.966 [2024-07-13 20:19:07.381797] bdev_nvme.c:7027:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:19.966 request: 00:31:19.966 { 00:31:19.966 "name": "nvme_second", 00:31:19.966 "trtype": "tcp", 00:31:19.966 "traddr": "10.0.0.2", 00:31:19.966 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:19.966 "adrfam": "ipv4", 00:31:19.966 "trsvcid": "8010", 00:31:19.966 "attach_timeout_ms": 3000, 00:31:19.966 "method": "bdev_nvme_start_discovery", 00:31:19.966 "req_id": 1 00:31:19.966 } 00:31:19.966 Got JSON-RPC error response 00:31:19.966 response: 00:31:19.966 { 00:31:19.966 "code": -110, 00:31:19.966 "message": "Connection timed out" 
00:31:19.966 } 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3316668 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:19.966 rmmod nvme_tcp 00:31:19.966 rmmod nvme_fabrics 00:31:19.966 rmmod nvme_keyring 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3316644 ']' 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3316644 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 3316644 ']' 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 3316644 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3316644 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3316644' 00:31:19.966 killing process with pid 3316644 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 3316644 00:31:19.966 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 3316644 00:31:20.225 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:20.225 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:20.225 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:20.225 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:20.225 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:20.225 20:19:07 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.225 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:20.225 20:19:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:22.761 00:31:22.761 real 0m13.096s 00:31:22.761 user 0m19.051s 00:31:22.761 sys 0m2.783s 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.761 ************************************ 00:31:22.761 END TEST nvmf_host_discovery 00:31:22.761 ************************************ 00:31:22.761 20:19:09 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:22.761 20:19:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:22.761 20:19:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:22.761 20:19:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:22.761 ************************************ 00:31:22.761 START TEST nvmf_host_multipath_status 00:31:22.761 ************************************ 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:22.761 * Looking for test storage... 
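The banners and timing summary above come from the harness's run_test wrapper, which brackets each suite with START/END markers and times the invocation (the real/user/sys figures printed before END TEST). A rough sketch of that bracketing, assuming the real helper only adds xtrace and accounting details beyond this:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    # as invoked above for the next suite:
    run_test nvmf_host_multipath_status \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp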
00:31:22.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.761 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:22.762 20:19:09 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:22.762 20:19:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:24.664 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.664 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:24.665 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
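The device scan above classifies candidate NICs purely by PCI vendor:device ID before choosing test interfaces: the e810 array collects Intel (0x8086) IDs 0x1592/0x159b, x722 collects 0x37d2, and the mlx array covers the Mellanox (0x15b3) parts, after which both 0000:0a:00.0 and 0000:0a:00.1 match the E810 0x159b entry. A standalone approximation of that lookup using lspci, with the ID table trimmed to the IDs actually seen in this run:

    # List E810-family NICs (8086:159b or 8086:1592) by PCI address,
    # roughly what the pci_bus_cache lookup in nvmf/common.sh resolves to.
    lspci -Dnn | awk '/\[8086:(159b|1592)\]/ { print $1 }'

Each matching address is then mapped to its net device via /sys/bus/pci/devices/$pci/net/, which is how the subsequent lines arrive at cvl_0_0 and cvl_0_1.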
00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:24.665 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:24.665 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:24.665 20:19:11 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:24.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:24.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:31:24.665 00:31:24.665 --- 10.0.0.2 ping statistics --- 00:31:24.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.665 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:24.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:24.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:31:24.665 00:31:24.665 --- 10.0.0.1 ping statistics --- 00:31:24.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:24.665 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:24.665 20:19:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:24.665 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:24.665 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:24.665 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:24.665 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:24.665 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3319817 00:31:24.665 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:24.665 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3319817 00:31:24.665 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3319817 ']' 00:31:24.665 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:24.665 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:24.665 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:24.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:24.665 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:24.665 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:24.665 [2024-07-13 20:19:12.063971] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
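The connectivity checks above validate the split topology nvmf_tcp_init builds: the target NIC (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace while the initiator NIC (cvl_0_1, 10.0.0.1) stays in the root namespace, so host and target traverse a real TCP path on one machine. Condensed from the ip/iptables commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity: each side must reach the other before the target starts
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt started next is itself launched under ip netns exec cvl_0_0_ns_spdk, which is why its listeners bind 10.0.0.2 while bdevperf connects from the root namespace.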
00:31:24.665 [2024-07-13 20:19:12.064045] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:24.665 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.665 [2024-07-13 20:19:12.129363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:24.665 [2024-07-13 20:19:12.219929] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:24.665 [2024-07-13 20:19:12.219990] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:24.665 [2024-07-13 20:19:12.220018] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:24.666 [2024-07-13 20:19:12.220030] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:24.666 [2024-07-13 20:19:12.220039] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:24.666 [2024-07-13 20:19:12.223889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.666 [2024-07-13 20:19:12.223901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.924 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:24.924 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:24.924 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:24.924 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:24.924 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:24.924 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:24.924 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3319817 00:31:24.924 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:25.182 [2024-07-13 20:19:12.641791] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:25.182 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:25.440 Malloc0 00:31:25.440 20:19:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:25.699 20:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:25.957 20:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:26.215 [2024-07-13 20:19:13.728772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.215 20:19:13 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:26.473 [2024-07-13 20:19:13.973463] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:26.473 20:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3319985 00:31:26.473 20:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:26.473 20:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:26.473 20:19:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3319985 /var/tmp/bdevperf.sock 00:31:26.473 20:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 3319985 ']' 00:31:26.473 20:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:26.473 20:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:26.473 20:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:26.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:26.473 20:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:26.473 20:19:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:26.731 20:19:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:26.731 20:19:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:26.731 20:19:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:26.989 20:19:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:27.557 Nvme0n1 00:31:27.557 20:19:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:27.815 Nvme0n1 00:31:27.815 20:19:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:27.815 20:19:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:30.347 20:19:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:30.347 20:19:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:30.348 20:19:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:30.348 20:19:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:31.755 20:19:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:31.755 20:19:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:31.755 20:19:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.755 20:19:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:31.755 20:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.755 20:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:31.755 20:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.755 20:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:32.016 20:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:32.016 20:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:32.016 20:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.016 20:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:32.275 20:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.275 20:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:32.275 20:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.275 20:19:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:32.533 20:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.533 20:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:32.533 20:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.533 20:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:31:32.792 20:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.792 20:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:32.792 20:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.792 20:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:33.050 20:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.050 20:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:33.050 20:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:33.308 20:19:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:33.567 20:19:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:34.504 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:34.504 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:34.504 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.504 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:34.762 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:34.762 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:34.762 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.762 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:35.020 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.020 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:35.020 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.020 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:35.278 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true 
== \t\r\u\e ]] 00:31:35.278 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:35.278 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.278 20:19:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:35.535 20:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.535 20:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:35.535 20:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.535 20:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:35.792 20:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.792 20:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:35.792 20:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.792 20:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:36.049 20:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.049 20:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:36.049 20:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:36.306 20:19:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:36.563 20:19:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:37.499 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:37.499 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:37.499 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.499 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:37.757 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.757 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current false 00:31:37.757 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.757 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:38.015 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:38.015 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:38.015 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.015 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:38.273 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.273 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:38.273 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.273 20:19:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:38.530 20:19:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.530 20:19:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:38.530 20:19:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.531 20:19:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:38.788 20:19:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.788 20:19:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:38.788 20:19:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.788 20:19:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:39.047 20:19:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:39.047 20:19:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:39.047 20:19:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:39.305 20:19:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:39.564 20:19:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:40.503 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:40.503 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:40.503 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.503 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:40.761 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.761 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:40.761 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.761 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:41.017 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:41.017 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:41.017 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.017 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:41.275 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.275 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:41.275 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.275 20:19:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:41.532 20:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.532 20:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:41.532 20:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.532 20:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:41.789 20:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
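Every port_status assertion traced at multipath_status.sh@64 above and below follows one pattern: query the bdevperf application over its RPC socket with bdev_nvme_get_io_paths, select the io_path whose listener port (trsvcid) matches, extract one attribute (current, connected, or accessible), and string-compare it against the expected value. A minimal sketch of that helper, reconstructed from the trace — the rpc_py and bdevperf_rpc_sock variable names are assumptions; their values are taken verbatim from the trace:

    # Paths exactly as they appear in the trace above (variable names assumed).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # port_status <trsvcid> <attribute> <expected value>
    # Succeeds iff the io_path on the given port reports the expected value
    # for .current, .connected or .accessible.
    port_status() {
        local port=$1 attr=$2 expected=$3 actual
        actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }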
00:31:41.789 20:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:41.789 20:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.789 20:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:42.045 20:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:42.045 20:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:42.045 20:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:42.302 20:19:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:42.561 20:19:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:43.493 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:43.493 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:43.493 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.493 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:43.750 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:43.750 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:43.750 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.750 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:44.008 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:44.008 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:44.008 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.008 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:44.266 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.266 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
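Each ANA transition exercised in this test (optimized/optimized, non_optimized/optimized, non_optimized/non_optimized, non_optimized/inaccessible, inaccessible/inaccessible, inaccessible/optimized) is applied by set_ANA_state, traced at multipath_status.sh@59-60: one nvmf_subsystem_listener_set_ana_state RPC per listener port on subsystem nqn.2016-06.io.spdk:cnode1. A sketch reconstructed from the trace, reusing the rpc_py variable assumed in the port_status sketch above:

    # set_ANA_state <state for port 4420> <state for port 4421>
    # Valid states: optimized, non_optimized, inaccessible.
    set_ANA_state() {
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

After each transition the test sleeps one second before asserting, giving the ANA change time to propagate to the host-side io_path flags checked next.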
00:31:44.266 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.266 20:19:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:44.524 20:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.524 20:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:44.524 20:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.524 20:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:44.785 20:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:44.785 20:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:44.785 20:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.785 20:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:45.107 20:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:45.107 20:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:45.107 20:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:45.372 20:19:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:45.630 20:19:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:46.561 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:46.562 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:46.562 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.562 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:46.819 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:46.819 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:46.819 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.819 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:47.077 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.077 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:47.077 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.077 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:47.335 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.335 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:47.335 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.335 20:19:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:47.593 20:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.593 20:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:47.593 20:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.593 20:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:47.852 20:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:47.852 20:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:47.852 20:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.852 20:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:48.110 20:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.110 20:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:48.368 20:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:48.368 20:19:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:31:48.626 20:19:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:48.889 20:19:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:49.825 20:19:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:49.825 20:19:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:49.825 20:19:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.825 20:19:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:50.083 20:19:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.083 20:19:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:50.083 20:19:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.083 20:19:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:50.342 20:19:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.342 20:19:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:50.342 20:19:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.342 20:19:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:50.600 20:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.600 20:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:50.600 20:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.600 20:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:50.858 20:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.858 20:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:50.858 20:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.858 20:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:51.116 20:19:38 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.116 20:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:51.116 20:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.116 20:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:51.375 20:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.375 20:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:51.375 20:19:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:51.633 20:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:51.891 20:19:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:52.825 20:19:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:52.825 20:19:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:52.825 20:19:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.825 20:19:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:53.084 20:19:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:53.084 20:19:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:53.084 20:19:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.084 20:19:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:53.342 20:19:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.342 20:19:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:53.342 20:19:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.343 20:19:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:53.601 20:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.601 20:19:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:53.601 20:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.601 20:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:53.859 20:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.859 20:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:53.859 20:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.859 20:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:54.117 20:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.117 20:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:54.117 20:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.117 20:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:54.376 20:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.376 20:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:54.376 20:19:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:54.634 20:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:54.892 20:19:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:55.825 20:19:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:55.825 20:19:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:55.825 20:19:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.825 20:19:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:56.083 20:19:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.083 20:19:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:56.083 20:19:43 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.083 20:19:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:56.341 20:19:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.341 20:19:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:56.341 20:19:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.341 20:19:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:56.599 20:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.599 20:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:56.599 20:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.599 20:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:56.857 20:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.857 20:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:56.857 20:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.857 20:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:57.115 20:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.115 20:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:57.116 20:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.116 20:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:57.374 20:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.374 20:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:57.374 20:19:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:57.632 20:19:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:57.891 20:19:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:58.883 20:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:58.883 20:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:58.883 20:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.883 20:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:59.141 20:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.141 20:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:59.141 20:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.141 20:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:59.400 20:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:59.400 20:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:59.400 20:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.400 20:19:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:59.665 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.665 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:59.665 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.665 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:59.923 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.923 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:59.923 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.923 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:00.180 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:00.180 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:00.180 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:00.180 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:00.437 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:00.437 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3319985 00:32:00.437 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3319985 ']' 00:32:00.437 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3319985 00:32:00.437 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:32:00.437 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:00.437 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3319985 00:32:00.437 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:32:00.437 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:32:00.437 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3319985' 00:32:00.437 killing process with pid 3319985 00:32:00.437 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3319985 00:32:00.437 20:19:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3319985 00:32:00.437 Connection closed with partial response: 00:32:00.437 00:32:00.437 00:32:00.697 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3319985 00:32:00.697 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:00.697 [2024-07-13 20:19:14.028425] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:00.697 [2024-07-13 20:19:14.028503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319985 ] 00:32:00.697 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.697 [2024-07-13 20:19:14.087330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.697 [2024-07-13 20:19:14.177434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.697 Running I/O for 90 seconds... 
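Each check_status call above bundles six port_status assertions in a fixed order: current, connected, and accessible, for port 4420 and then port 4421 (multipath_status.sh@68-73). A sketch of that wrapper, assuming the port_status helper reconstructed earlier:

    # check_status <4420 current> <4421 current> <4420 connected> <4421 connected>
    #              <4420 accessible> <4421 accessible>
    check_status() {
        port_status 4420 current "$1" &&
        port_status 4421 current "$2" &&
        port_status 4420 connected "$3" &&
        port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" &&
        port_status 4421 accessible "$6"
    }

In the bdevperf log replayed below (try.txt), the READ/WRITE completions printed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) are consistent with the inaccessible transitions above: I/O outstanding on a path whose listener has just been set inaccessible completes with that ANA status, and the host multipath layer is expected to reroute subsequent I/O to the path that is still accessible.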
00:32:00.697 [2024-07-13 20:19:29.846590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:00.697 [2024-07-13 20:19:29.846667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.846731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.846752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.846776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:130064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.846794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.846817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:130072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.846834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.846883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.846903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.846927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.846945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.846968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:130096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.846985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.847008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.847025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.847049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.847065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.847088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:130120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.847105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.847127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:130128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.847154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.847179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:130136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.847211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.847244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:130144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.847260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.847282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.847297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.847319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:130160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.847335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.847356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.847372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.847393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.697 [2024-07-13 20:19:29.847409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:32:00.697 [2024-07-13 20:19:29.847430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:130184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.847446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.847467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.847483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.847778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:130200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.847799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.847822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:130208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.847838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.847877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.847895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.847917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:130224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.847934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.847962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:130232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.847979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.848001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:130240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.848017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.848039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:130248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.848056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.848077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.848094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.848117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:130264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.848134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.848156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.848194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.848216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130280 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:00.698 [2024-07-13 20:19:29.848232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.848253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:130288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.848269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.848290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.848307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.848592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:130304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.848615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.848716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.848737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.848763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:130320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.848780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.848809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:130328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.848827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.848851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:130336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.848950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.848979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:130344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.848997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.849021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:130352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:00.698 [2024-07-13 20:19:29.849038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:00.698 [2024-07-13 20:19:29.849062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:17 nsid:1 lba:130360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:00.698 [2024-07-13 20:19:29.849079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
[... a long run of near-identical notice pairs elided: every further WRITE (lba:130368 through lba:130824) and READ (lba:129816 through lba:130048) queued on qid:1 was printed by nvme_io_qpair_print_command and completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), p:0 m:0 dnr:0, between 20:19:29.849 and 20:19:29.853 ...]
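An aside for readers decoding these completions (not part of the captured log): the "(03/02)" pair is NVMe Status Code Type / Status Code, and SCT 0x3 (path related) with SC 0x2 is what SPDK's spdk_nvme_print_completion renders as ASYMMETRIC ACCESS INACCESSIBLE, i.e. the ANA group serving this path went inaccessible mid-run; dnr:0 means the "do not retry" bit is clear, so the multipath layer is free to re-queue each I/O on the surviving path. A hypothetical one-liner to count how many completions failed this way in a saved copy of the console output (the build.log file name is an assumption):

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log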
00:32:00.700 [2024-07-13 20:19:45.328422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:00.700 [2024-07-13 20:19:45.328475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
[... a second, equally uniform burst elided: WRITEs lba:48896 through lba:49464 and READs lba:48520 through lba:48864 on qid:1, every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), p:0 m:0 dnr:0, between 20:19:45.328 and 20:19:45.334 ...]
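To recover the LBA windows summarized in the elided blocks above from a saved copy of this log, a small pipeline like the following works (the file name is again an assumption; it prints the smallest and largest LBA mentioned):

    grep -o 'lba:[0-9]*' build.log | cut -d: -f2 | sort -n | sed -n '1p;$p'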
00:32:00.702 Received shutdown signal, test time was about 32.295069 seconds
00:32:00.702
00:32:00.702 Latency(us)
00:32:00.702 Device Information : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min     max
00:32:00.702 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:00.702 Verification LBA range: start 0x0 length 0x4000
00:32:00.702 Nvme0n1            : 32.29       7400.82  28.91  0.00    0.00  17263.95  412.63  4026531.84
00:32:00.702 ===================================================================================================================
00:32:00.702 Total              :             7400.82  28.91  0.00    0.00  17263.95  412.63  4026531.84
00:32:00.702 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:00.702 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:32:00.702 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:00.702 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:32:00.702 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:32:00.702 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:32:00.959 rmmod nvme_tcp
00:32:00.959 rmmod nvme_fabrics
00:32:00.959 rmmod nvme_keyring
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3319817 ']'
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3319817
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 3319817 ']'
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 3319817
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3319817
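Two notes on the summary and teardown above. First, the table is internally consistent: 7400.82 IOPS of 4096-byte verify I/O is 7400.82 / 256 ≈ 28.91 MiB/s, matching the MiB/s column. Second, the traced cleanup can be read back into roughly the following shell, reconstructed from the @-line numbers in the trace; the TEST_TRANSPORT variable name, the retry back-off, and the sudo branch are assumptions, everything else mirrors the log:

    nvmfcleanup() {                                  # nvmf/common.sh@488
        sync                                         # @117: flush dirty pages before unloading
        if [ "$TEST_TRANSPORT" == tcp ]; then        # @119: '[' tcp == tcp ']' in the trace
            set +e                                   # @120: rmmod may fail while references linger
            for i in {1..20}; do                     # @121: bounded retry loop
                modprobe -v -r nvme-tcp && break     # @122: cascades to nvme_fabrics/nvme_keyring
                sleep 1                              # assumption: pause between attempts
            done
            modprobe -v -r nvme-fabrics              # @123: mop up if it is still loaded
            set -e                                   # @124
        fi
        return 0                                     # @125
    }

    killprocess() {                                  # common/autotest_common.sh
        local pid=$1
        [ -z "$pid" ] && return 1                    # @946: require a pid argument
        kill -0 "$pid" || return 0                   # @950: already gone, nothing to do
        if [ "$(uname)" = Linux ]; then              # @951
            process_name=$(ps --no-headers -o comm= "$pid")   # @952: reactor_0 in this run
        fi
        # @956: if the target were sudo, the real child would be resolved first
        # (that branch is not taken here, so its body is a guess and omitted)
        echo "killing process with pid $pid"         # @964
        kill "$pid"                                  # @965
        wait "$pid"                                  # @970: reap so ports and hugepages are freed
    }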
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3319817'
00:32:00.959 killing process with pid 3319817
00:32:00.959 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 3319817
00:32:01.218 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 3319817
00:32:01.218 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:32:01.218 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:32:01.218 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:32:01.218 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:32:01.218 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:32:01.218 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:01.218 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:32:01.218 20:19:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:03.118 20:19:50 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:32:03.118
00:32:03.118 real 0m40.885s
00:32:03.118 user 1m55.261s
00:32:03.118 sys 0m13.366s
00:32:03.118 20:19:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable
00:32:03.118 20:19:50 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:32:03.118 ************************************
00:32:03.118 END TEST nvmf_host_multipath_status
00:32:03.118 ************************************
00:32:03.376 20:19:50 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:32:03.376 20:19:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:32:03.376 20:19:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:32:03.376 20:19:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:03.376 ************************************
00:32:03.376 START TEST nvmf_discovery_remove_ifc
00:32:03.376 ************************************
00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:32:03.376 * Looking for test storage...
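A brief aside before the new suite's storage probe continues below: the START/END banner pairs and the real/user/sys block above come from the harness's run_test wrapper, which behaves roughly like this sketch (the actual autotest_common.sh version also records results for reporting; that bookkeeping is omitted here):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                  # emits the real/user/sys block when the suite exits
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

Here it was invoked as run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp.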
00:32:03.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:03.376 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[... the same toolchain-prefixed PATH as printed at paths/export.sh@2 above, with the go directory prepended once more, elided ...]
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same expanding toolchain PATH, elided ...]
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... full PATH echoed once more, elided ...]
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- #
host_sock=/tmp/host.sock 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:32:03.377 20:19:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:05.278 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:05.278 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:05.278 20:19:52 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:05.278 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:05.278 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:05.278 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:05.279 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:05.536 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:32:05.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:05.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms
00:32:05.536
00:32:05.536 --- 10.0.0.2 ping statistics ---
00:32:05.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:05.536 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms
00:32:05.536 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:05.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:05.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms
00:32:05.536
00:32:05.536 --- 10.0.0.1 ping statistics ---
00:32:05.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:05.536 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3326170
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3326170
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3326170 ']'
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:05.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable
00:32:05.537 20:19:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:32:05.537 [2024-07-13 20:19:53.023359] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:32:05.537 [2024-07-13 20:19:53.023447] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:05.537 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.537 [2024-07-13 20:19:53.092782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.537 [2024-07-13 20:19:53.181603] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:05.537 [2024-07-13 20:19:53.181667] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:05.537 [2024-07-13 20:19:53.181693] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:05.537 [2024-07-13 20:19:53.181706] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:05.537 [2024-07-13 20:19:53.181718] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:05.537 [2024-07-13 20:19:53.181757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.794 [2024-07-13 20:19:53.336068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:05.794 [2024-07-13 20:19:53.344290] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:05.794 null0 00:32:05.794 [2024-07-13 20:19:53.376225] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3326194 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3326194 /tmp/host.sock 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 3326194 ']' 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:05.794 
20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:05.794 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:05.794 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.794 [2024-07-13 20:19:53.440076] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:32:05.794 [2024-07-13 20:19:53.440143] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3326194 ] 00:32:06.058 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.058 [2024-07-13 20:19:53.501602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.058 [2024-07-13 20:19:53.593718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.058 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:06.058 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:06.058 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:06.058 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:06.058 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.058 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:06.058 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.058 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:06.058 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.058 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:06.318 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.318 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:06.318 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.318 20:19:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.261 [2024-07-13 20:19:54.865745] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:07.261 [2024-07-13 20:19:54.865787] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:07.261 [2024-07-13 20:19:54.865813] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:07.519 [2024-07-13 20:19:54.994302] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:07.519 [2024-07-13 20:19:55.096260] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:07.519 [2024-07-13 20:19:55.096337] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:07.519 [2024-07-13 20:19:55.096386] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:07.519 [2024-07-13 20:19:55.096417] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:07.519 [2024-07-13 20:19:55.096457] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:07.519 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.519 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:07.519 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.519 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.519 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.519 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.519 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.519 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.519 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.519 [2024-07-13 20:19:55.102638] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x207c900 was disconnected and freed. delete nvme_qpair. 
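The trace above is the entire host-side attach path: a second SPDK app is launched with --wait-for-rpc, the nvme bdev module is configured, and bdev_nvme_start_discovery connects to the discovery service and blocks until the discovered subsystem is attached. Stripped of the harness's rpc_cmd wrapper, the same sequence looks roughly like the sketch below (rpc.py is SPDK's stock RPC client; every argument value is copied verbatim from the trace, only the relative paths are illustrative):

  # Host app on core 0, RPC socket at /tmp/host.sock, paused until RPC init.
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  # Attach through the discovery service at 10.0.0.2:8009; --wait-for-attach
  # returns only once the discovered subsystem's controller is created.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'  # nvme0n1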
00:32:07.519 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.519 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:07.519 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:07.519 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:07.779 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:07.779 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.779 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.779 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.779 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.779 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.779 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.779 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.779 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.779 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:07.779 20:19:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:08.717 20:19:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:08.717 20:19:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.717 20:19:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:08.717 20:19:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.717 20:19:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.717 20:19:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:08.717 20:19:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:08.717 20:19:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.717 20:19:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:08.717 20:19:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:09.650 20:19:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:09.650 20:19:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.650 20:19:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:09.650 20:19:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.650 20:19:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:09.650 20:19:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:32:09.650 20:19:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:09.650 20:19:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.911 20:19:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:09.911 20:19:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:10.846 20:19:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:10.846 20:19:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:10.846 20:19:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:10.846 20:19:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.846 20:19:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:10.846 20:19:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:10.846 20:19:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:10.846 20:19:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.846 20:19:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:10.846 20:19:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:11.780 20:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:11.780 20:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:11.780 20:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:11.780 20:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.780 20:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:11.780 20:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.780 20:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:11.780 20:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.780 20:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:11.780 20:19:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:13.171 20:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:13.171 20:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.171 20:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.171 20:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:13.171 20:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:13.171 20:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:13.171 20:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:13.171 20:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
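The identical bdev_get_bdevs / jq / sort / xargs blocks repeating above are one-second polls of the harness's wait loop. Reconstructed from this xtrace (the real helpers live in test/nvmf/host/discovery_remove_ifc.sh and may differ in detail), the pattern is:

  get_bdev_list() {
      # Flatten the bdev list into one sorted, space-separated string.
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # Poll until the list equals the expected value ('' = no bdevs left).
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }

At this point the loop still sees nvme0n1: the target-side interface was torn down only moments earlier, and the host has not yet hit its keep-alive timeout.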
00:32:13.171 20:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:13.171 20:20:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:13.171 [2024-07-13 20:20:00.537137] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:13.171 [2024-07-13 20:20:00.537211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.171 [2024-07-13 20:20:00.537231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.171 [2024-07-13 20:20:00.537248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.171 [2024-07-13 20:20:00.537262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.171 [2024-07-13 20:20:00.537275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.171 [2024-07-13 20:20:00.537301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.171 [2024-07-13 20:20:00.537314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.171 [2024-07-13 20:20:00.537328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.171 [2024-07-13 20:20:00.537341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:13.171 [2024-07-13 20:20:00.537353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:13.171 [2024-07-13 20:20:00.537365] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2043990 is same with the state(5) to be set 00:32:13.171 [2024-07-13 20:20:00.547165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2043990 (9): Bad file descriptor 00:32:13.171 [2024-07-13 20:20:00.557223] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:13.824 20:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:13.825 20:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.825 20:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.825 20:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:13.825 20:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:13.825 20:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:13.825 20:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:14.084 [2024-07-13 20:20:01.565893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:14.084 [2024-07-13 
20:20:01.565948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2043990 with addr=10.0.0.2, port=4420 00:32:14.084 [2024-07-13 20:20:01.565966] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2043990 is same with the state(5) to be set 00:32:14.084 [2024-07-13 20:20:01.566010] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2043990 (9): Bad file descriptor 00:32:14.084 [2024-07-13 20:20:01.566418] bdev_nvme.c:2896:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:14.084 [2024-07-13 20:20:01.566451] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:14.084 [2024-07-13 20:20:01.566468] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:14.084 [2024-07-13 20:20:01.566486] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:14.084 [2024-07-13 20:20:01.566516] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:14.084 [2024-07-13 20:20:01.566543] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:14.084 20:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.084 20:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:14.084 20:20:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:15.023 [2024-07-13 20:20:02.569039] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:15.023 [2024-07-13 20:20:02.569068] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:15.023 [2024-07-13 20:20:02.569082] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:15.023 [2024-07-13 20:20:02.569096] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:15.023 [2024-07-13 20:20:02.569115] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
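The cascade above (spdk_sock_recv errno 110, i.e. ETIMEDOUT, followed by connect() failures against 10.0.0.2:4420 and repeated reset attempts) is governed by the three timeouts passed to bdev_nvme_start_discovery earlier in the trace. Roughly, and hedged since the exact bookkeeping is internal to bdev_nvme:

  #  t=0s  the admin-queue read times out; queued commands complete as
  #        ABORTED - SQ DELETION and a reconnect is scheduled after
  #        --reconnect-delay-sec=1
  #  t=1s  pending I/O starts failing back once --fast-io-fail-timeout-sec=1
  #        elapses
  #  t=2s  --ctrlr-loss-timeout-sec=2 expires; the controller is deleted and
  #        nvme0n1 drops out of bdev_get_bdevs
  # The intermediate states can be watched with a standard SPDK RPC (output
  # shape varies by release):
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers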
00:32:15.023 [2024-07-13 20:20:02.569167] bdev_nvme.c:6735:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:15.023 [2024-07-13 20:20:02.569201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:15.023 [2024-07-13 20:20:02.569223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.023 [2024-07-13 20:20:02.569242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:15.023 [2024-07-13 20:20:02.569257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.023 [2024-07-13 20:20:02.569273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:15.023 [2024-07-13 20:20:02.569287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.023 [2024-07-13 20:20:02.569304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:15.023 [2024-07-13 20:20:02.569319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.023 [2024-07-13 20:20:02.569336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:15.023 [2024-07-13 20:20:02.569352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:15.023 [2024-07-13 20:20:02.569366] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
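For reference, the outage being handled here was injected right after the first attach by tearing the target-side port out from under the live connection, and it is undone immediately below so discovery can re-attach. Both command pairs are copied verbatim from the trace:

  # Inject the fault: remove the target address and down the link.
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # Clear it: restore the address and bring the link back up.
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up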
00:32:15.023 [2024-07-13 20:20:02.569609] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2042de0 (9): Bad file descriptor 00:32:15.023 [2024-07-13 20:20:02.570627] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:15.023 [2024-07-13 20:20:02.570654] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:15.023 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.281 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:15.281 20:20:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:16.213 20:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:16.213 20:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:16.213 20:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:16.213 20:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.213 20:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:32:16.213 20:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.213 20:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:16.213 20:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.213 20:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:16.213 20:20:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:17.152 [2024-07-13 20:20:04.620796] bdev_nvme.c:6984:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:17.152 [2024-07-13 20:20:04.620836] bdev_nvme.c:7064:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:17.152 [2024-07-13 20:20:04.620863] bdev_nvme.c:6947:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:17.152 [2024-07-13 20:20:04.708149] bdev_nvme.c:6913:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:17.152 20:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:17.152 20:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:17.152 20:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:17.152 20:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.152 20:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:17.152 20:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:17.152 20:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:17.152 20:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.152 20:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:17.152 20:20:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:17.411 [2024-07-13 20:20:04.893738] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:17.411 [2024-07-13 20:20:04.893794] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:17.411 [2024-07-13 20:20:04.893832] bdev_nvme.c:7774:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:17.411 [2024-07-13 20:20:04.893860] bdev_nvme.c:6803:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:17.411 [2024-07-13 20:20:04.893885] bdev_nvme.c:6762:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:17.411 [2024-07-13 20:20:04.899339] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x205df60 was disconnected and freed. delete nvme_qpair. 
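Note the subsystem comes back under a fresh controller name: the re-run discovery attaches nvme1, so the harness now waits for nvme1n1 rather than nvme0n1. Once the attach completes, the same listing used throughout confirms it:

  rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # -> nvme1n1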
00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3326194 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3326194 ']' 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3326194 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3326194 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3326194' 00:32:18.352 killing process with pid 3326194 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3326194 00:32:18.352 20:20:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3326194 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:18.612 rmmod nvme_tcp 00:32:18.612 rmmod nvme_fabrics 00:32:18.612 rmmod nvme_keyring 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
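nvmfcleanup above unloads the host kernel modules that nvmftestinit loaded; the bare rmmod lines are modprobe's verbose output. Reconstructed from the trace (the retry loop's details may differ slightly from nvmf/common.sh):

  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # retried while module refs drain
      sleep 1                            # assumption: short pause per retry
  done
  modprobe -v -r nvme-fabrics
  set -e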
00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3326170 ']' 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3326170 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 3326170 ']' 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 3326170 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3326170 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3326170' 00:32:18.612 killing process with pid 3326170 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 3326170 00:32:18.612 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 3326170 00:32:18.871 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:18.871 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:18.871 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:18.871 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:18.871 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:18.871 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.871 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:18.871 20:20:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.409 20:20:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:21.409 00:32:21.409 real 0m17.659s 00:32:21.409 user 0m25.606s 00:32:21.409 sys 0m3.045s 00:32:21.409 20:20:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:21.409 20:20:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:21.409 ************************************ 00:32:21.409 END TEST nvmf_discovery_remove_ifc 00:32:21.409 ************************************ 00:32:21.409 20:20:08 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:21.409 20:20:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:21.409 20:20:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:21.409 20:20:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:21.409 ************************************ 00:32:21.409 START TEST nvmf_identify_kernel_target 00:32:21.409 ************************************ 00:32:21.409 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:21.410 * Looking for test storage... 00:32:21.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 
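[editor's note] The nvmftestinit call traced here ends in nvmf_tcp_init, whose effect is easier to read as one sequence than as interleaved xtrace. Condensed from the commands that appear in the trace below, this is the whole test network: two ports of the same ice NIC, one moved into a namespace so initiator and target talk over real wire (the cvl_0_* names are this host's devices and otherwise illustrative):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                             # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
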
00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:21.410 20:20:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:23.314 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:23.314 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:23.314 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:23.314 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:23.314 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:23.315 20:20:10 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:23.315 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:23.315 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:23.315 
20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:23.315 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:23.315 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:23.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:23.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:32:23.315 00:32:23.315 --- 10.0.0.2 ping statistics --- 00:32:23.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.315 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:23.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:23.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:32:23.315 00:32:23.315 --- 10.0.0.1 ping statistics --- 00:32:23.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:23.315 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.315 
20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.315 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:23.316 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:23.316 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:23.316 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:23.316 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:23.316 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:23.316 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:23.316 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:23.316 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:23.316 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:23.316 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:23.316 20:20:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:24.249 Waiting for block devices as requested 00:32:24.249 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:24.507 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:24.507 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:24.765 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:24.765 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:24.765 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:24.765 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:24.765 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:25.024 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:25.024 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:25.024 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:25.283 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:25.283 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:25.283 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:25.283 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:25.540 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:25.540 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:25.540 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:25.540 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:25.540 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:25.540 20:20:13 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:25.540 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:25.541 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:25.541 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:25.541 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:25.541 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:25.798 No valid GPT data, bailing 00:32:25.798 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:25.798 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:25.798 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:25.798 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:25.798 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:25.799 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:25.799 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:25.799 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:25.799 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:25.799 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:25.799 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:25.799 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:25.799 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:25.799 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:25.799 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:25.799 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:25.799 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:25.799 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:25.799 00:32:25.799 Discovery Log Number of Records 2, Generation counter 2 00:32:25.799 =====Discovery Log Entry 0====== 00:32:25.799 trtype: tcp 00:32:25.799 adrfam: ipv4 00:32:25.799 subtype: current discovery subsystem 00:32:25.799 treq: not specified, sq flow control disable supported 00:32:25.799 portid: 1 00:32:25.799 trsvcid: 4420 00:32:25.799 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:25.799 traddr: 10.0.0.1 00:32:25.799 eflags: none 00:32:25.799 sectype: none 00:32:25.799 =====Discovery Log Entry 1====== 
00:32:25.799 trtype: tcp 00:32:25.799 adrfam: ipv4 00:32:25.799 subtype: nvme subsystem 00:32:25.799 treq: not specified, sq flow control disable supported 00:32:25.799 portid: 1 00:32:25.799 trsvcid: 4420 00:32:25.799 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:25.799 traddr: 10.0.0.1 00:32:25.799 eflags: none 00:32:25.799 sectype: none 00:32:25.799 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:25.799 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:25.799 EAL: No free 2048 kB hugepages reported on node 1 00:32:25.799 ===================================================== 00:32:25.799 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:25.799 ===================================================== 00:32:25.799 Controller Capabilities/Features 00:32:25.799 ================================ 00:32:25.799 Vendor ID: 0000 00:32:25.799 Subsystem Vendor ID: 0000 00:32:25.799 Serial Number: 777ce409ca093df141f5 00:32:25.799 Model Number: Linux 00:32:25.799 Firmware Version: 6.7.0-68 00:32:25.799 Recommended Arb Burst: 0 00:32:25.799 IEEE OUI Identifier: 00 00 00 00:32:25.799 Multi-path I/O 00:32:25.799 May have multiple subsystem ports: No 00:32:25.799 May have multiple controllers: No 00:32:25.799 Associated with SR-IOV VF: No 00:32:25.799 Max Data Transfer Size: Unlimited 00:32:25.799 Max Number of Namespaces: 0 00:32:25.799 Max Number of I/O Queues: 1024 00:32:25.799 NVMe Specification Version (VS): 1.3 00:32:25.799 NVMe Specification Version (Identify): 1.3 00:32:25.799 Maximum Queue Entries: 1024 00:32:25.799 Contiguous Queues Required: No 00:32:25.799 Arbitration Mechanisms Supported 00:32:25.799 Weighted Round Robin: Not Supported 00:32:25.799 Vendor Specific: Not Supported 00:32:25.799 Reset Timeout: 7500 ms 00:32:25.799 Doorbell Stride: 4 bytes 00:32:25.799 NVM Subsystem Reset: Not Supported 00:32:25.799 Command Sets Supported 00:32:25.799 NVM Command Set: Supported 00:32:25.799 Boot Partition: Not Supported 00:32:25.799 Memory Page Size Minimum: 4096 bytes 00:32:25.799 Memory Page Size Maximum: 4096 bytes 00:32:25.799 Persistent Memory Region: Not Supported 00:32:25.799 Optional Asynchronous Events Supported 00:32:25.799 Namespace Attribute Notices: Not Supported 00:32:25.799 Firmware Activation Notices: Not Supported 00:32:25.799 ANA Change Notices: Not Supported 00:32:25.799 PLE Aggregate Log Change Notices: Not Supported 00:32:25.799 LBA Status Info Alert Notices: Not Supported 00:32:25.799 EGE Aggregate Log Change Notices: Not Supported 00:32:25.799 Normal NVM Subsystem Shutdown event: Not Supported 00:32:25.799 Zone Descriptor Change Notices: Not Supported 00:32:25.799 Discovery Log Change Notices: Supported 00:32:25.799 Controller Attributes 00:32:25.799 128-bit Host Identifier: Not Supported 00:32:25.799 Non-Operational Permissive Mode: Not Supported 00:32:25.799 NVM Sets: Not Supported 00:32:25.799 Read Recovery Levels: Not Supported 00:32:25.799 Endurance Groups: Not Supported 00:32:25.799 Predictable Latency Mode: Not Supported 00:32:25.799 Traffic Based Keep ALive: Not Supported 00:32:25.799 Namespace Granularity: Not Supported 00:32:25.799 SQ Associations: Not Supported 00:32:25.799 UUID List: Not Supported 00:32:25.799 Multi-Domain Subsystem: Not Supported 00:32:25.799 Fixed Capacity Management: Not Supported 00:32:25.799 Variable Capacity Management: Not 
Supported 00:32:25.799 Delete Endurance Group: Not Supported 00:32:25.799 Delete NVM Set: Not Supported 00:32:25.799 Extended LBA Formats Supported: Not Supported 00:32:25.799 Flexible Data Placement Supported: Not Supported 00:32:25.799 00:32:25.799 Controller Memory Buffer Support 00:32:25.799 ================================ 00:32:25.799 Supported: No 00:32:25.799 00:32:25.799 Persistent Memory Region Support 00:32:25.799 ================================ 00:32:25.799 Supported: No 00:32:25.799 00:32:25.799 Admin Command Set Attributes 00:32:25.799 ============================ 00:32:25.799 Security Send/Receive: Not Supported 00:32:25.799 Format NVM: Not Supported 00:32:25.799 Firmware Activate/Download: Not Supported 00:32:25.799 Namespace Management: Not Supported 00:32:25.799 Device Self-Test: Not Supported 00:32:25.799 Directives: Not Supported 00:32:25.799 NVMe-MI: Not Supported 00:32:25.799 Virtualization Management: Not Supported 00:32:25.799 Doorbell Buffer Config: Not Supported 00:32:25.799 Get LBA Status Capability: Not Supported 00:32:25.799 Command & Feature Lockdown Capability: Not Supported 00:32:25.799 Abort Command Limit: 1 00:32:25.799 Async Event Request Limit: 1 00:32:25.799 Number of Firmware Slots: N/A 00:32:25.799 Firmware Slot 1 Read-Only: N/A 00:32:25.799 Firmware Activation Without Reset: N/A 00:32:25.799 Multiple Update Detection Support: N/A 00:32:25.799 Firmware Update Granularity: No Information Provided 00:32:25.799 Per-Namespace SMART Log: No 00:32:25.799 Asymmetric Namespace Access Log Page: Not Supported 00:32:25.799 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:25.799 Command Effects Log Page: Not Supported 00:32:25.799 Get Log Page Extended Data: Supported 00:32:25.799 Telemetry Log Pages: Not Supported 00:32:25.799 Persistent Event Log Pages: Not Supported 00:32:25.799 Supported Log Pages Log Page: May Support 00:32:25.799 Commands Supported & Effects Log Page: Not Supported 00:32:25.799 Feature Identifiers & Effects Log Page:May Support 00:32:25.799 NVMe-MI Commands & Effects Log Page: May Support 00:32:25.799 Data Area 4 for Telemetry Log: Not Supported 00:32:25.799 Error Log Page Entries Supported: 1 00:32:25.799 Keep Alive: Not Supported 00:32:25.799 00:32:25.799 NVM Command Set Attributes 00:32:25.799 ========================== 00:32:25.799 Submission Queue Entry Size 00:32:25.799 Max: 1 00:32:25.799 Min: 1 00:32:25.799 Completion Queue Entry Size 00:32:25.799 Max: 1 00:32:25.799 Min: 1 00:32:25.799 Number of Namespaces: 0 00:32:25.799 Compare Command: Not Supported 00:32:25.799 Write Uncorrectable Command: Not Supported 00:32:25.799 Dataset Management Command: Not Supported 00:32:25.799 Write Zeroes Command: Not Supported 00:32:25.799 Set Features Save Field: Not Supported 00:32:25.799 Reservations: Not Supported 00:32:25.799 Timestamp: Not Supported 00:32:25.799 Copy: Not Supported 00:32:25.799 Volatile Write Cache: Not Present 00:32:25.799 Atomic Write Unit (Normal): 1 00:32:25.799 Atomic Write Unit (PFail): 1 00:32:25.799 Atomic Compare & Write Unit: 1 00:32:25.799 Fused Compare & Write: Not Supported 00:32:25.799 Scatter-Gather List 00:32:25.799 SGL Command Set: Supported 00:32:25.799 SGL Keyed: Not Supported 00:32:25.799 SGL Bit Bucket Descriptor: Not Supported 00:32:25.799 SGL Metadata Pointer: Not Supported 00:32:25.799 Oversized SGL: Not Supported 00:32:25.799 SGL Metadata Address: Not Supported 00:32:25.799 SGL Offset: Supported 00:32:25.799 Transport SGL Data Block: Not Supported 00:32:25.799 Replay Protected Memory Block: 
Not Supported 00:32:25.799 00:32:25.799 Firmware Slot Information 00:32:25.799 ========================= 00:32:25.799 Active slot: 0 00:32:25.799 00:32:25.799 00:32:25.799 Error Log 00:32:25.799 ========= 00:32:25.799 00:32:25.799 Active Namespaces 00:32:25.799 ================= 00:32:25.799 Discovery Log Page 00:32:25.799 ================== 00:32:25.799 Generation Counter: 2 00:32:25.799 Number of Records: 2 00:32:25.799 Record Format: 0 00:32:25.799 00:32:25.799 Discovery Log Entry 0 00:32:25.799 ---------------------- 00:32:25.799 Transport Type: 3 (TCP) 00:32:25.799 Address Family: 1 (IPv4) 00:32:25.799 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:25.799 Entry Flags: 00:32:25.800 Duplicate Returned Information: 0 00:32:25.800 Explicit Persistent Connection Support for Discovery: 0 00:32:25.800 Transport Requirements: 00:32:25.800 Secure Channel: Not Specified 00:32:25.800 Port ID: 1 (0x0001) 00:32:25.800 Controller ID: 65535 (0xffff) 00:32:25.800 Admin Max SQ Size: 32 00:32:25.800 Transport Service Identifier: 4420 00:32:25.800 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:25.800 Transport Address: 10.0.0.1 00:32:25.800 Discovery Log Entry 1 00:32:25.800 ---------------------- 00:32:25.800 Transport Type: 3 (TCP) 00:32:25.800 Address Family: 1 (IPv4) 00:32:25.800 Subsystem Type: 2 (NVM Subsystem) 00:32:25.800 Entry Flags: 00:32:25.800 Duplicate Returned Information: 0 00:32:25.800 Explicit Persistent Connection Support for Discovery: 0 00:32:25.800 Transport Requirements: 00:32:25.800 Secure Channel: Not Specified 00:32:25.800 Port ID: 1 (0x0001) 00:32:25.800 Controller ID: 65535 (0xffff) 00:32:25.800 Admin Max SQ Size: 32 00:32:25.800 Transport Service Identifier: 4420 00:32:25.800 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:25.800 Transport Address: 10.0.0.1 00:32:25.800 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:26.058 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.058 get_feature(0x01) failed 00:32:26.058 get_feature(0x02) failed 00:32:26.058 get_feature(0x04) failed 00:32:26.058 ===================================================== 00:32:26.058 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:26.058 ===================================================== 00:32:26.058 Controller Capabilities/Features 00:32:26.058 ================================ 00:32:26.058 Vendor ID: 0000 00:32:26.058 Subsystem Vendor ID: 0000 00:32:26.058 Serial Number: f24ccf5370c7dbdba4ce 00:32:26.058 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:26.058 Firmware Version: 6.7.0-68 00:32:26.058 Recommended Arb Burst: 6 00:32:26.058 IEEE OUI Identifier: 00 00 00 00:32:26.058 Multi-path I/O 00:32:26.058 May have multiple subsystem ports: Yes 00:32:26.058 May have multiple controllers: Yes 00:32:26.058 Associated with SR-IOV VF: No 00:32:26.058 Max Data Transfer Size: Unlimited 00:32:26.058 Max Number of Namespaces: 1024 00:32:26.058 Max Number of I/O Queues: 128 00:32:26.058 NVMe Specification Version (VS): 1.3 00:32:26.058 NVMe Specification Version (Identify): 1.3 00:32:26.058 Maximum Queue Entries: 1024 00:32:26.058 Contiguous Queues Required: No 00:32:26.058 Arbitration Mechanisms Supported 00:32:26.058 Weighted Round Robin: Not Supported 00:32:26.058 Vendor Specific: Not Supported 
00:32:26.058 Reset Timeout: 7500 ms 00:32:26.058 Doorbell Stride: 4 bytes 00:32:26.058 NVM Subsystem Reset: Not Supported 00:32:26.058 Command Sets Supported 00:32:26.058 NVM Command Set: Supported 00:32:26.058 Boot Partition: Not Supported 00:32:26.058 Memory Page Size Minimum: 4096 bytes 00:32:26.058 Memory Page Size Maximum: 4096 bytes 00:32:26.058 Persistent Memory Region: Not Supported 00:32:26.058 Optional Asynchronous Events Supported 00:32:26.058 Namespace Attribute Notices: Supported 00:32:26.058 Firmware Activation Notices: Not Supported 00:32:26.058 ANA Change Notices: Supported 00:32:26.058 PLE Aggregate Log Change Notices: Not Supported 00:32:26.058 LBA Status Info Alert Notices: Not Supported 00:32:26.058 EGE Aggregate Log Change Notices: Not Supported 00:32:26.058 Normal NVM Subsystem Shutdown event: Not Supported 00:32:26.058 Zone Descriptor Change Notices: Not Supported 00:32:26.058 Discovery Log Change Notices: Not Supported 00:32:26.058 Controller Attributes 00:32:26.058 128-bit Host Identifier: Supported 00:32:26.058 Non-Operational Permissive Mode: Not Supported 00:32:26.058 NVM Sets: Not Supported 00:32:26.058 Read Recovery Levels: Not Supported 00:32:26.058 Endurance Groups: Not Supported 00:32:26.058 Predictable Latency Mode: Not Supported 00:32:26.058 Traffic Based Keep ALive: Supported 00:32:26.058 Namespace Granularity: Not Supported 00:32:26.058 SQ Associations: Not Supported 00:32:26.058 UUID List: Not Supported 00:32:26.058 Multi-Domain Subsystem: Not Supported 00:32:26.058 Fixed Capacity Management: Not Supported 00:32:26.058 Variable Capacity Management: Not Supported 00:32:26.058 Delete Endurance Group: Not Supported 00:32:26.058 Delete NVM Set: Not Supported 00:32:26.058 Extended LBA Formats Supported: Not Supported 00:32:26.058 Flexible Data Placement Supported: Not Supported 00:32:26.058 00:32:26.058 Controller Memory Buffer Support 00:32:26.058 ================================ 00:32:26.058 Supported: No 00:32:26.058 00:32:26.058 Persistent Memory Region Support 00:32:26.058 ================================ 00:32:26.058 Supported: No 00:32:26.058 00:32:26.058 Admin Command Set Attributes 00:32:26.058 ============================ 00:32:26.058 Security Send/Receive: Not Supported 00:32:26.058 Format NVM: Not Supported 00:32:26.058 Firmware Activate/Download: Not Supported 00:32:26.058 Namespace Management: Not Supported 00:32:26.058 Device Self-Test: Not Supported 00:32:26.058 Directives: Not Supported 00:32:26.058 NVMe-MI: Not Supported 00:32:26.058 Virtualization Management: Not Supported 00:32:26.058 Doorbell Buffer Config: Not Supported 00:32:26.058 Get LBA Status Capability: Not Supported 00:32:26.058 Command & Feature Lockdown Capability: Not Supported 00:32:26.058 Abort Command Limit: 4 00:32:26.058 Async Event Request Limit: 4 00:32:26.058 Number of Firmware Slots: N/A 00:32:26.058 Firmware Slot 1 Read-Only: N/A 00:32:26.058 Firmware Activation Without Reset: N/A 00:32:26.058 Multiple Update Detection Support: N/A 00:32:26.058 Firmware Update Granularity: No Information Provided 00:32:26.058 Per-Namespace SMART Log: Yes 00:32:26.058 Asymmetric Namespace Access Log Page: Supported 00:32:26.058 ANA Transition Time : 10 sec 00:32:26.058 00:32:26.058 Asymmetric Namespace Access Capabilities 00:32:26.058 ANA Optimized State : Supported 00:32:26.058 ANA Non-Optimized State : Supported 00:32:26.058 ANA Inaccessible State : Supported 00:32:26.058 ANA Persistent Loss State : Supported 00:32:26.058 ANA Change State : Supported 00:32:26.058 ANAGRPID is not 
changed : No 00:32:26.058 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:26.058 00:32:26.058 ANA Group Identifier Maximum : 128 00:32:26.058 Number of ANA Group Identifiers : 128 00:32:26.058 Max Number of Allowed Namespaces : 1024 00:32:26.058 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:26.058 Command Effects Log Page: Supported 00:32:26.058 Get Log Page Extended Data: Supported 00:32:26.058 Telemetry Log Pages: Not Supported 00:32:26.058 Persistent Event Log Pages: Not Supported 00:32:26.058 Supported Log Pages Log Page: May Support 00:32:26.058 Commands Supported & Effects Log Page: Not Supported 00:32:26.058 Feature Identifiers & Effects Log Page:May Support 00:32:26.058 NVMe-MI Commands & Effects Log Page: May Support 00:32:26.058 Data Area 4 for Telemetry Log: Not Supported 00:32:26.058 Error Log Page Entries Supported: 128 00:32:26.058 Keep Alive: Supported 00:32:26.058 Keep Alive Granularity: 1000 ms 00:32:26.058 00:32:26.058 NVM Command Set Attributes 00:32:26.058 ========================== 00:32:26.058 Submission Queue Entry Size 00:32:26.058 Max: 64 00:32:26.058 Min: 64 00:32:26.058 Completion Queue Entry Size 00:32:26.058 Max: 16 00:32:26.058 Min: 16 00:32:26.058 Number of Namespaces: 1024 00:32:26.058 Compare Command: Not Supported 00:32:26.058 Write Uncorrectable Command: Not Supported 00:32:26.058 Dataset Management Command: Supported 00:32:26.058 Write Zeroes Command: Supported 00:32:26.058 Set Features Save Field: Not Supported 00:32:26.058 Reservations: Not Supported 00:32:26.058 Timestamp: Not Supported 00:32:26.058 Copy: Not Supported 00:32:26.058 Volatile Write Cache: Present 00:32:26.058 Atomic Write Unit (Normal): 1 00:32:26.058 Atomic Write Unit (PFail): 1 00:32:26.058 Atomic Compare & Write Unit: 1 00:32:26.058 Fused Compare & Write: Not Supported 00:32:26.058 Scatter-Gather List 00:32:26.058 SGL Command Set: Supported 00:32:26.058 SGL Keyed: Not Supported 00:32:26.058 SGL Bit Bucket Descriptor: Not Supported 00:32:26.058 SGL Metadata Pointer: Not Supported 00:32:26.058 Oversized SGL: Not Supported 00:32:26.058 SGL Metadata Address: Not Supported 00:32:26.058 SGL Offset: Supported 00:32:26.058 Transport SGL Data Block: Not Supported 00:32:26.058 Replay Protected Memory Block: Not Supported 00:32:26.058 00:32:26.058 Firmware Slot Information 00:32:26.058 ========================= 00:32:26.058 Active slot: 0 00:32:26.058 00:32:26.058 Asymmetric Namespace Access 00:32:26.058 =========================== 00:32:26.058 Change Count : 0 00:32:26.058 Number of ANA Group Descriptors : 1 00:32:26.058 ANA Group Descriptor : 0 00:32:26.058 ANA Group ID : 1 00:32:26.058 Number of NSID Values : 1 00:32:26.058 Change Count : 0 00:32:26.058 ANA State : 1 00:32:26.058 Namespace Identifier : 1 00:32:26.058 00:32:26.059 Commands Supported and Effects 00:32:26.059 ============================== 00:32:26.059 Admin Commands 00:32:26.059 -------------- 00:32:26.059 Get Log Page (02h): Supported 00:32:26.059 Identify (06h): Supported 00:32:26.059 Abort (08h): Supported 00:32:26.059 Set Features (09h): Supported 00:32:26.059 Get Features (0Ah): Supported 00:32:26.059 Asynchronous Event Request (0Ch): Supported 00:32:26.059 Keep Alive (18h): Supported 00:32:26.059 I/O Commands 00:32:26.059 ------------ 00:32:26.059 Flush (00h): Supported 00:32:26.059 Write (01h): Supported LBA-Change 00:32:26.059 Read (02h): Supported 00:32:26.059 Write Zeroes (08h): Supported LBA-Change 00:32:26.059 Dataset Management (09h): Supported 00:32:26.059 00:32:26.059 Error Log 00:32:26.059 ========= 
00:32:26.059 Entry: 0 00:32:26.059 Error Count: 0x3 00:32:26.059 Submission Queue Id: 0x0 00:32:26.059 Command Id: 0x5 00:32:26.059 Phase Bit: 0 00:32:26.059 Status Code: 0x2 00:32:26.059 Status Code Type: 0x0 00:32:26.059 Do Not Retry: 1 00:32:26.059 Error Location: 0x28 00:32:26.059 LBA: 0x0 00:32:26.059 Namespace: 0x0 00:32:26.059 Vendor Log Page: 0x0 00:32:26.059 ----------- 00:32:26.059 Entry: 1 00:32:26.059 Error Count: 0x2 00:32:26.059 Submission Queue Id: 0x0 00:32:26.059 Command Id: 0x5 00:32:26.059 Phase Bit: 0 00:32:26.059 Status Code: 0x2 00:32:26.059 Status Code Type: 0x0 00:32:26.059 Do Not Retry: 1 00:32:26.059 Error Location: 0x28 00:32:26.059 LBA: 0x0 00:32:26.059 Namespace: 0x0 00:32:26.059 Vendor Log Page: 0x0 00:32:26.059 ----------- 00:32:26.059 Entry: 2 00:32:26.059 Error Count: 0x1 00:32:26.059 Submission Queue Id: 0x0 00:32:26.059 Command Id: 0x4 00:32:26.059 Phase Bit: 0 00:32:26.059 Status Code: 0x2 00:32:26.059 Status Code Type: 0x0 00:32:26.059 Do Not Retry: 1 00:32:26.059 Error Location: 0x28 00:32:26.059 LBA: 0x0 00:32:26.059 Namespace: 0x0 00:32:26.059 Vendor Log Page: 0x0 00:32:26.059 00:32:26.059 Number of Queues 00:32:26.059 ================ 00:32:26.059 Number of I/O Submission Queues: 128 00:32:26.059 Number of I/O Completion Queues: 128 00:32:26.059 00:32:26.059 ZNS Specific Controller Data 00:32:26.059 ============================ 00:32:26.059 Zone Append Size Limit: 0 00:32:26.059 00:32:26.059 00:32:26.059 Active Namespaces 00:32:26.059 ================= 00:32:26.059 get_feature(0x05) failed 00:32:26.059 Namespace ID:1 00:32:26.059 Command Set Identifier: NVM (00h) 00:32:26.059 Deallocate: Supported 00:32:26.059 Deallocated/Unwritten Error: Not Supported 00:32:26.059 Deallocated Read Value: Unknown 00:32:26.059 Deallocate in Write Zeroes: Not Supported 00:32:26.059 Deallocated Guard Field: 0xFFFF 00:32:26.059 Flush: Supported 00:32:26.059 Reservation: Not Supported 00:32:26.059 Namespace Sharing Capabilities: Multiple Controllers 00:32:26.059 Size (in LBAs): 1953525168 (931GiB) 00:32:26.059 Capacity (in LBAs): 1953525168 (931GiB) 00:32:26.059 Utilization (in LBAs): 1953525168 (931GiB) 00:32:26.059 UUID: d633f996-167d-452a-9ec8-3f0e78ac393c 00:32:26.059 Thin Provisioning: Not Supported 00:32:26.059 Per-NS Atomic Units: Yes 00:32:26.059 Atomic Boundary Size (Normal): 0 00:32:26.059 Atomic Boundary Size (PFail): 0 00:32:26.059 Atomic Boundary Offset: 0 00:32:26.059 NGUID/EUI64 Never Reused: No 00:32:26.059 ANA group ID: 1 00:32:26.059 Namespace Write Protected: No 00:32:26.059 Number of LBA Formats: 1 00:32:26.059 Current LBA Format: LBA Format #00 00:32:26.059 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:26.059 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:26.059 rmmod nvme_tcp 00:32:26.059 rmmod nvme_fabrics 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:26.059 20:20:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.963 20:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:27.963 20:20:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:27.963 20:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:27.963 20:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:27.963 20:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:27.963 20:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:27.963 20:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:27.963 20:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:27.963 20:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:27.963 20:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:28.222 20:20:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:29.164 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:29.164 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:29.164 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:29.164 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:29.164 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:29.470 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:29.470 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:29.470 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:29.470 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:29.470 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:29.470 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:29.470 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:29.470 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:29.470 0000:80:04.2 (8086 0e22): ioatdma 
-> vfio-pci 00:32:29.470 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:29.470 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:30.429 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:30.429 00:32:30.429 real 0m9.446s 00:32:30.429 user 0m1.986s 00:32:30.429 sys 0m3.380s 00:32:30.429 20:20:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:30.429 20:20:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:30.429 ************************************ 00:32:30.429 END TEST nvmf_identify_kernel_target 00:32:30.429 ************************************ 00:32:30.429 20:20:17 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:30.429 20:20:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:30.429 20:20:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:30.429 20:20:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:30.429 ************************************ 00:32:30.429 START TEST nvmf_auth_host 00:32:30.429 ************************************ 00:32:30.429 20:20:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:30.429 * Looking for test storage... 00:32:30.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
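[editor's note] The kernel-target plumbing that the just-finished identify_kernel_target test set up and tore down is worth pulling out of the trace: it is a small configfs lifecycle. The mkdir/echo/ln/rmdir commands are condensed from the xtrace above; the attribute file names are hidden by xtrace (redirections are not shown), so they are reconstructed here from the standard kernel nvmet configfs layout and should be treated as the editor's reading, not a quote from the script:

  cfs=/sys/kernel/config/nvmet
  subsys=$cfs/subsystems/nqn.2016-06.io.spdk:testnqn

  # setup: export /dev/nvme0n1 over NVMe/TCP on 10.0.0.1:4420
  modprobe nvmet
  mkdir $subsys $subsys/namespaces/1 $cfs/ports/1
  echo SPDK-nqn.2016-06.io.spdk:testnqn > $subsys/attr_model   # shows up as Model Number above
  echo 1 > $subsys/attr_allow_any_host
  echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
  echo 1 > $subsys/namespaces/1/enable
  echo 10.0.0.1 > $cfs/ports/1/addr_traddr
  echo tcp > $cfs/ports/1/addr_trtype
  echo 4420 > $cfs/ports/1/addr_trsvcid
  echo ipv4 > $cfs/ports/1/addr_adrfam
  ln -s $subsys $cfs/ports/1/subsystems/                       # port goes live here

  # teardown: what clean_kernel_target ran above, in reverse order
  echo 0 > $subsys/namespaces/1/enable
  rm -f $cfs/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir $subsys/namespaces/1 $cfs/ports/1 $subsys
  modprobe -r nvmet_tcp nvmet
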
00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:30.429 20:20:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:30.430 20:20:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@298 -- # local -ga mlx 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:32.332 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:32.333 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:32.333 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 
]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:32.333 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:32.333 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:32:32.333 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:32.591 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:32.591 20:20:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:32.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:32.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:32:32.591 00:32:32.591 --- 10.0.0.2 ping statistics --- 00:32:32.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.591 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:32.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:32.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:32:32.591 00:32:32.591 --- 10.0.0.1 ping statistics --- 00:32:32.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.591 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:32.591 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:32.592 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:32.592 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:32.592 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.592 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3333342 00:32:32.592 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:32.592 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # 
waitforlisten 3333342 00:32:32.592 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3333342 ']' 00:32:32.592 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.592 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:32.592 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.592 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:32.592 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cc42ec8f81777615d98d19d9ddbf60ae 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.lhq 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cc42ec8f81777615d98d19d9ddbf60ae 0 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cc42ec8f81777615d98d19d9ddbf60ae 0 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cc42ec8f81777615d98d19d9ddbf60ae 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:32.850 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.lhq 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.lhq 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.lhq 
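[annotation] The gen_dhchap_key round traced above draws random bytes with xxd, keeps the hex text itself as the secret, and pipes it through an inline python snippet (nvmf/common.sh@705) that appends a CRC-32 and base64-encodes the result into the NVMe DH-HMAC-CHAP wire format. The python body itself is not shown in the log, so the sketch below is a reconstruction from the xtrace and the key strings that appear later in the trace, not a verbatim copy of nvmf/common.sh:

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars; the ASCII hex text is the secret
python3 - "$key" 0 <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")           # CRC-32 of the secret, little-endian
print("DHHC-1:%02d:%s:" % (digest, base64.b64encode(key + crc).decode()))
PY

The digest field encodes which hash may transform the secret (0 = null/use as-is, 1 = sha256, 2 = sha384, 3 = sha512), matching the digests map declared in the trace above.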
00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c36782e88d585a4e251d7e079805dcd37827728a6b6a758fa2864936d4ac9aaf 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.8Qx 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c36782e88d585a4e251d7e079805dcd37827728a6b6a758fa2864936d4ac9aaf 3 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c36782e88d585a4e251d7e079805dcd37827728a6b6a758fa2864936d4ac9aaf 3 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c36782e88d585a4e251d7e079805dcd37827728a6b6a758fa2864936d4ac9aaf 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.8Qx 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.8Qx 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.8Qx 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f330b3ba6856259232695f554655a94af2607062b8145b27 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.YbV 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f330b3ba6856259232695f554655a94af2607062b8145b27 0 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f330b3ba6856259232695f554655a94af2607062b8145b27 0 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix 
key digest 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f330b3ba6856259232695f554655a94af2607062b8145b27 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.YbV 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.YbV 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.YbV 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ab01471bb3f08f4157a1235b569b6936f4bef0e2573fa318 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Wor 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ab01471bb3f08f4157a1235b569b6936f4bef0e2573fa318 2 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ab01471bb3f08f4157a1235b569b6936f4bef0e2573fa318 2 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ab01471bb3f08f4157a1235b569b6936f4bef0e2573fa318 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Wor 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Wor 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Wor 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=7ff43da13c0ab0d9677ab0874941da0d 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:33.109 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.t1n 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7ff43da13c0ab0d9677ab0874941da0d 1 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7ff43da13c0ab0d9677ab0874941da0d 1 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7ff43da13c0ab0d9677ab0874941da0d 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.t1n 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.t1n 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.t1n 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d8698cfd8c6459bf17c2aa8f1fc91087 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.y7h 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d8698cfd8c6459bf17c2aa8f1fc91087 1 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d8698cfd8c6459bf17c2aa8f1fc91087 1 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d8698cfd8c6459bf17c2aa8f1fc91087 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:33.110 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.y7h 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.y7h 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.y7h 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:33.370 20:20:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e6d155b7a29e21c8e3bb092c375f1e903773815b506e82ca 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.r8W 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e6d155b7a29e21c8e3bb092c375f1e903773815b506e82ca 2 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e6d155b7a29e21c8e3bb092c375f1e903773815b506e82ca 2 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e6d155b7a29e21c8e3bb092c375f1e903773815b506e82ca 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.r8W 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.r8W 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.r8W 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9051bc805844eb24b0670ebf1f2c46f6 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.f4s 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9051bc805844eb24b0670ebf1f2c46f6 0 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9051bc805844eb24b0670ebf1f2c46f6 0 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9051bc805844eb24b0670ebf1f2c46f6 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:33.370 20:20:20 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.f4s 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.f4s 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.f4s 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=985b0c89c1147cf2db8a7e80195833443c2f3aa9e2333a3bcb54526544274fc8 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.DfD 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 985b0c89c1147cf2db8a7e80195833443c2f3aa9e2333a3bcb54526544274fc8 3 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 985b0c89c1147cf2db8a7e80195833443c2f3aa9e2333a3bcb54526544274fc8 3 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=985b0c89c1147cf2db8a7e80195833443c2f3aa9e2333a3bcb54526544274fc8 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.DfD 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.DfD 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.DfD 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:33.370 20:20:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3333342 00:32:33.371 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 3333342 ']' 00:32:33.371 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:33.371 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:33.371 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:33.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
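[annotation] At this point the test holds five host keys (keys[0..4]) and controller keys ckeys[0..3]; ckeys[4] is set empty at host/auth.sh@77, so keyid 4 runs without a controller key. Each /tmp/spdk.key-* file holds one DHHC-1 string. A quick sanity-check sketch (illustrative only, not part of auth.sh) that decodes a formatted key back to its hex secret and verifies the trailing CRC-32, using one of the files created above:

python3 -c '
import base64, sys, zlib
blob = open(sys.argv[1]).read().strip()
raw = base64.b64decode(blob.split(":")[2])            # DHHC-1:<digest>:<b64>:
secret, crc = raw[:-4], raw[-4:]
assert zlib.crc32(secret).to_bytes(4, "little") == crc
print(secret.decode())' /tmp/spdk.key-null.lhq        # prints cc42ec8f81777615d98d19d9ddbf60ae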
00:32:33.371 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:33.371 20:20:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.lhq 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.8Qx ]] 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8Qx 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.YbV 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Wor ]] 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Wor 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.t1n 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.y7h ]] 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.y7h 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.r8W 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.f4s ]] 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.f4s 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.DfD 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.630 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
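[annotation] nvmet_auth_init now builds an authenticating target out of the Linux kernel's nvmet driver (modprobe nvmet, then a configfs tree) at the address 10.0.0.1 resolved by get_main_ns_ip. Bash xtrace does not print redirections, so the destinations of the echo statements in the trace that follows are hidden; a sketch of the equivalent configuration, assuming the standard kernel nvmet configfs attribute names:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"                # common.sh@658-660
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"     # common.sh@665
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"          # common.sh@668
echo 1            > "$subsys/namespaces/1/enable"               # common.sh@669
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"                # common.sh@671
echo tcp          > "$nvmet/ports/1/addr_trtype"                # common.sh@672
echo 4420         > "$nvmet/ports/1/addr_trsvcid"               # common.sh@673
echo ipv4         > "$nvmet/ports/1/addr_adrfam"                # common.sh@674
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                    # common.sh@677

The echo 1 at common.sh@667 plausibly lands in attr_allow_any_host; the echo 0 at host/auth.sh@37 later in the trace would then flip it off so that the allowed_hosts symlink added at auth.sh@38 governs access. Both destinations are hidden by xtrace, so that mapping is an assumption.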
00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:33.631 20:20:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:35.005 Waiting for block devices as requested 00:32:35.005 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:35.005 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:35.005 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:35.005 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:35.263 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:35.263 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:35.263 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:35.263 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:35.522 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:35.522 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:35.522 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:35.522 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:35.781 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:35.781 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:35.781 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:36.039 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:36.039 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:36.607 20:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:36.607 20:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:36.607 20:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:36.607 20:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:36.607 20:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:36.607 20:20:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:36.607 20:20:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:36.607 20:20:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:36.607 20:20:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:36.607 No valid GPT data, bailing 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:36.607 00:32:36.607 Discovery Log Number of Records 2, Generation counter 2 00:32:36.607 =====Discovery Log Entry 0====== 00:32:36.607 trtype: tcp 00:32:36.607 adrfam: ipv4 00:32:36.607 subtype: current discovery subsystem 00:32:36.607 treq: not specified, sq flow control disable supported 00:32:36.607 portid: 1 00:32:36.607 trsvcid: 4420 00:32:36.607 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:36.607 traddr: 10.0.0.1 00:32:36.607 eflags: none 00:32:36.607 sectype: none 00:32:36.607 =====Discovery Log Entry 1====== 00:32:36.607 trtype: tcp 00:32:36.607 adrfam: ipv4 00:32:36.607 subtype: nvme subsystem 00:32:36.607 treq: not specified, sq flow control disable supported 00:32:36.607 portid: 1 00:32:36.607 trsvcid: 4420 00:32:36.607 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:36.607 traddr: 10.0.0.1 00:32:36.607 eflags: none 00:32:36.607 sectype: none 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 
]] 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:36.607 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.608 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.868 nvme0n1 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.868 20:20:24 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: ]] 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.868 
20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.868 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.127 nvme0n1 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.127 20:20:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:37.127 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.128 nvme0n1 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:37.128 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: ]] 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.386 nvme0n1 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.386 20:20:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.386 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.386 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.386 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.386 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.386 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.386 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.386 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: ]] 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:37.646 20:20:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.646 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.647 nvme0n1 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.647 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.906 nvme0n1 00:32:37.906 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.906 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.906 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: ]] 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.907 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.165 nvme0n1 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.165 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.166 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.166 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.166 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.166 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.166 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.166 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.166 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.166 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.166 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.166 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.166 20:20:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.166 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.166 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.166 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.426 nvme0n1 00:32:38.426 
20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.426 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.426 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.426 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.426 20:20:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.426 20:20:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: ]] 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.426 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.685 nvme0n1 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: ]] 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:38.685 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.686 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.945 nvme0n1 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.945 
20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.945 20:20:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.945 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.204 nvme0n1 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: ]] 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:39.204 20:20:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.204 20:20:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.462 nvme0n1 00:32:39.462 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.462 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.462 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.462 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.462 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.462 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.722 20:20:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.722 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.982 nvme0n1 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: ]] 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.982 20:20:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.982 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.983 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.983 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.983 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:39.983 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.983 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.243 nvme0n1 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
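Every attach above is preceded by the same get_main_ns_ip expansion, which maps the transport to the environment variable holding the connect address (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and dereferences it, which is why the trace tests the literal strings tcp and NVMF_INITIATOR_IP before echoing 10.0.0.1. A plausible reconstruction from the xtrace at nvmf/common.sh@741-755 (the exact control flow and return codes are assumptions, as is TEST_TRANSPORT carrying the transport name):

    # Reconstructed from the xtrace; assumes TEST_TRANSPORT=tcp and
    # NVMF_INITIATOR_IP=10.0.0.1 as resolved in the trace.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # Reject an unset transport, or one with no candidate variable.
        if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
            return 1
        fi

        ip=${ip_candidates[$TEST_TRANSPORT]}  # the *name* of the variable to read
        [[ -z ${!ip} ]] && return 1           # indirect check: [[ -z 10.0.0.1 ]] here
        echo "${!ip}"
    }

The result feeds straight into the -a argument of bdev_nvme_attach_controller, e.g. rpc.py bdev_nvme_attach_controller ... -a "$(get_main_ns_ip)" -s 4420.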
00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: ]] 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.243 20:20:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.811 nvme0n1 00:32:40.811 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.811 20:20:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.811 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.811 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.812 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.071 nvme0n1 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:41.071 20:20:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: ]] 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.071 20:20:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.639 nvme0n1 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.639 
20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.639 20:20:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.639 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.207 nvme0n1 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: ]] 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.207 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.208 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.208 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.208 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.208 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.208 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.208 20:20:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.208 20:20:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:42.208 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.208 20:20:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.776 nvme0n1 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.776 
20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: ]] 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.776 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.370 nvme0n1 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.370 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.371 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.371 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.371 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.371 20:20:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.371 20:20:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:43.371 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.371 20:20:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.938 nvme0n1 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: ]] 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.938 20:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.939 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.939 20:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.939 20:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.939 20:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.939 20:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.939 20:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.939 20:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.939 20:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.939 20:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.939 20:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.939 20:20:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.939 20:20:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:43.939 20:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.939 20:20:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.875 nvme0n1 00:32:44.875 20:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.875 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.875 20:20:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.875 20:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.875 20:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.875 20:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.133 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.133 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.133 20:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.133 20:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.133 20:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.133 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.133 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:45.133 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.133 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.134 20:20:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.073 nvme0n1 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:46.073 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: ]] 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.074 20:20:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.026 nvme0n1 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.027 
20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: ]] 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
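
The get_main_ns_ip block that repeats before every attach (nvmf/common.sh@741..755) only resolves which address to dial: it maps the transport to the name of an environment variable and then dereferences it. A sketch, under the assumption that the transport is carried in a variable such as TEST_TRANSPORT; the trace shows only its expanded value, tcp:

# Sketch of the helper traced at nvmf/common.sh@741..755. TEST_TRANSPORT is an
# assumed variable name, not visible in the xtrace output.
get_main_ns_ip_sketch() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)

    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # the variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1            # dereference it; 10.0.0.1 in this run
    echo "${!ip}"
}
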
00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.027 20:20:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.965 nvme0n1 00:32:47.965 20:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.965 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.965 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.965 20:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.965 20:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.965 20:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.965 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.965 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.965 20:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.965 20:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.965 20:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.965 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.965 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:47.965 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:47.966 
20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.966 20:20:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.903 nvme0n1 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: ]] 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:48.903 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.904 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.162 nvme0n1 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.162 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
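What the trace above amounts to on the target side: for each key slot, nvmet_auth_set_key echoes the HMAC name, the FFDHE group, and the DHHC-1 secrets into the kernel nvmet configfs entry for the host NQN (the DHHC-1:NN: prefix encodes the secret transformation in the NVMe DH-HMAC-CHAP representation: 00 = opaque, 01/02/03 = SHA-256/384/512). A minimal sketch of that provisioning step, assuming the upstream Linux nvmet attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and eliding the real secrets shown in the trace:

#!/usr/bin/env bash
# Target-side sketch of what nvmet_auth_set_key does per iteration.
# Assumptions: standard nvmet configfs mount under /sys/kernel/config;
# attribute names follow upstream nvmet and may differ on older kernels.
hostnqn=nqn.2024-02.io.spdk:host0
host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn

digest=sha384          # written as 'hmac(sha384)', matching the log
dhgroup=ffdhe2048
key='DHHC-1:00:...'    # elided; the real secrets appear in the trace
ckey='DHHC-1:03:...'   # optional controller (bidirectional) secret

mkdir -p "$host_cfs"
echo "hmac(${digest})" > "$host_cfs/dhchap_hash"
echo "$dhgroup"        > "$host_cfs/dhchap_dhgroup"
echo "$key"            > "$host_cfs/dhchap_key"
[[ -n "$ckey" ]] && echo "$ckey" > "$host_cfs/dhchap_ctrl_key"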
00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.163 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.421 nvme0n1 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: ]] 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.421 20:20:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.679 nvme0n1 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: ]] 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.679 nvme0n1 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.679 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.937 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.938 nvme0n1 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: ]] 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
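On the host side, each iteration first pins SPDK to exactly one digest/DH-group pair via bdev_nvme_set_options, so every attach exercises a single combination; the attach then names keyring entries (key0/ckey0 and so on) rather than passing the raw secrets. A host-side sketch of one iteration using SPDK's scripts/rpc.py directly (rpc_cmd in this log is the autotest wrapper around it; the key names are assumed to have been registered with the SPDK keyring earlier in the test):

RPC=./scripts/rpc.py

# Restrict negotiation to one digest and one FFDHE group.
$RPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Attach; DH-HMAC-CHAP runs as part of the fabrics connect, so the
# controller only comes into existence if authentication succeeds.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0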
00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.938 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.197 nvme0n1 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:50.197 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
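The check that brackets every iteration is the recurring [[ nvme0 == \n\v\m\e\0 ]] line: that is xtrace's rendering of a literal string comparison (the right-hand side is escaped character by character so it cannot be taken as a glob), comparing the name reported by bdev_nvme_get_controllers against the expected nvme0. A sketch of that verify-and-teardown step, under the same rpc.py assumption as above:

# Pass condition: the authenticated controller is listed by name.
name=$($RPC bdev_nvme_get_controllers | jq -r '.[].name')
[[ "$name" == "nvme0" ]] || { echo "DH-HMAC-CHAP connect failed" >&2; exit 1; }

# Tear down so the next digest/DH-group/key combination starts clean.
$RPC bdev_nvme_detach_controller nvme0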
00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.198 20:20:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.456 nvme0n1 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: ]] 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.456 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.716 nvme0n1 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: ]] 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.716 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.975 nvme0n1 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.975 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.233 nvme0n1 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.233 20:20:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:51.233 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: ]] 00:32:51.234 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:51.234 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:51.234 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.234 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.234 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:51.234 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:51.234 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.234 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:51.234 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.234 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.492 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.492 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.492 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.492 20:20:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.492 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.492 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.492 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.492 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.492 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.492 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.492 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.492 20:20:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.492 20:20:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:51.492 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.492 20:20:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.751 nvme0n1 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.751 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.010 nvme0n1 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.010 20:20:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: ]] 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.010 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.011 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.011 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:52.011 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.011 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.268 nvme0n1 00:32:52.268 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.268 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.268 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.268 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.268 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.268 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: ]] 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:52.528 20:20:39 
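
Every secret in this run uses the DH-HMAC-CHAP key representation: a DHHC-1: prefix, a two-digit transformation id (00 = cleartext secret, 01/02/03 = secret transformed with SHA-256/384/512), then a base64 payload that by convention carries the secret followed by a 4-byte CRC-32 trailer; the trailing colon is part of the framing. A quick sanity check on a key copied verbatim from above (the 52-byte expectation — 48 secret bytes plus the CRC — is an assumption about this particular key, not something the log states):

    key='DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==:'
    payload=${key#DHHC-1:*:}        # strip the 'DHHC-1:00:' prefix
    payload=${payload%:}            # strip the trailing colon
    echo -n "$payload" | base64 -d | wc -c    # secret length + 4 CRC bytes
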
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.528 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.529 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.529 20:20:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.529 20:20:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:52.529 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.529 20:20:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.789 nvme0n1 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:52.789 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.050 nvme0n1 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: ]] 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.050 20:20:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.619 nvme0n1 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.619 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.188 nvme0n1 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.188 20:20:41 
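
get_main_ns_ip, traced inline above, only maps the transport under test to the environment variable holding the right address (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and prints its value — 10.0.0.1 throughout this run. Condensed into a standalone function (a simplification; the real helper lives in nvmf/common.sh and handles more error cases):

    get_main_ns_ip() {
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        local var=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
        [[ -z $var ]] && return 1                     # unknown transport
        echo "${!var}"                                # indirect expansion -> the IP itself
    }
    TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip   # -> 10.0.0.1
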
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: ]] 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.188 20:20:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.758 nvme0n1 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: ]] 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.758 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.018 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.018 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.018 20:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.018 20:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.018 20:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.018 20:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.018 20:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.018 20:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.018 20:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.018 20:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.018 20:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.018 20:20:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.018 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:55.018 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.018 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.589 nvme0n1 00:32:55.589 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.589 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.589 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.589 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.589 20:20:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.589 20:20:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
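
Each pass is verified the same way before teardown: bdev_nvme_get_controllers is piped through jq to pull the controller name, and [[ nvme0 == \n\v\m\e\0 ]] compares it against a fully backslash-escaped literal — escaping every character stops bash from treating the right-hand side of == as a glob pattern, forcing an exact string match. The check as two standalone lines (rpc.py path as in the earlier sketch):

    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == \n\v\m\e\0 ]] && ./scripts/rpc.py bdev_nvme_detach_controller "$name"
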
00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.589 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.154 nvme0n1 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: ]] 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
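
Structurally this whole section is three nested loops — digest × DH group × key id — with each combination funneled through the same nvmet_auth_set_key / connect_authenticate pair: sha384 with ffdhe4096, ffdhe6144 and ffdhe8192 above, sha512 starting just below. A skeleton of the driver, with array contents inferred from the combinations visible in this excerpt (the full arrays in auth.sh may contain more entries, e.g. sha256 and ffdhe3072 earlier in the log):

    digests=(sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do            # keys[0..4], set up earlier in auth.sh
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
            done
        done
    done
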
00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.154 20:20:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.143 nvme0n1 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.143 20:20:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.079 nvme0n1 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: ]] 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.079 20:20:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.017 nvme0n1 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:59.017 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: ]] 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.018 20:20:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.955 nvme0n1 00:32:59.955 20:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.955 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:59.955 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.955 20:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.955 20:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.955 20:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.214 20:20:47 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.214 20:20:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.151 nvme0n1 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: ]] 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.151 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.412 nvme0n1 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.412 20:20:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.412 20:20:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.671 nvme0n1 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: ]] 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.671 nvme0n1 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.671 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.929 20:20:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: ]] 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.929 20:20:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.929 nvme0n1 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.929 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.188 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.188 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.188 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:02.188 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.188 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.188 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:02.188 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:02.188 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:33:02.188 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:02.188 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.188 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:02.188 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:33:02.188 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.189 nvme0n1 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: ]] 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.189 20:20:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.448 nvme0n1 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.448 
20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.448 20:20:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.448 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.706 nvme0n1 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
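The echo pairs traced at host/auth.sh@48-51 provision the kernel nvmet target for the digest/dhgroup/key combination about to be tested. A minimal sketch of what nvmet_auth_set_key likely does, assuming the standard Linux nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) as the redirection targets of those echoes; the hostnqn directory is illustrative, and the keys/ckeys arrays are the ones indexed in the trace:

# Sketch; the configfs path and redirections are assumptions, the echoed
# values and the ckey emptiness check are taken straight from the xtrace.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${hostdir}/dhchap_hash"     # auth.sh@48
    echo "${dhgroup}"      > "${hostdir}/dhchap_dhgroup"  # auth.sh@49
    echo "${key}"          > "${hostdir}/dhchap_key"      # auth.sh@50
    # keyid=4 defines no controller key, so its trace shows [[ -z '' ]] and no echo:
    [[ -z $ckey ]] || echo "${ckey}" > "${hostdir}/dhchap_ctrl_key"  # auth.sh@51
}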
00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: ]] 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.706 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.964 nvme0n1 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.964 20:20:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: ]] 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
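Before every attach, the nvmf/common.sh@741-755 block resolves which address the initiator should dial. A condensed reconstruction from the trace: the candidate table and the indirect expansion are taken verbatim, while the exact early-return error handling is an assumption:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP   # common.sh@744
        ["tcp"]=NVMF_INITIATOR_IP       # common.sh@745
    )
    # Both emptiness tests land on common.sh@747 in the trace:
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}  # @748: ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1           # @750: indirect expansion yields 10.0.0.1
    echo "${!ip}"                         # @755
}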
00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.964 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.965 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:02.965 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.965 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.225 nvme0n1 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.225 
20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.225 20:20:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.484 nvme0n1 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: ]] 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.484 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.744 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.744 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.744 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.744 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.744 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.744 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.744 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.744 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.744 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.744 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.744 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:03.744 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.744 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.003 nvme0n1 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.003 20:20:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.003 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.263 nvme0n1 00:33:04.263 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.263 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.263 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.263 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.263 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.263 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.263 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.263 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.263 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.263 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.263 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.263 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.263 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:04.263 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.263 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
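The nvmet_auth_set_key calls traced above (host/auth.sh@42-51) program the target half of DH-HMAC-CHAP before each connect attempt: an HMAC name, a DH group, and the DHHC-1 secrets for the selected key index. A minimal sketch of such a helper, assuming the kernel nvmet host entry is exposed through configfs; $nvmet_host and the keys[]/ckeys[] arrays are stand-in names, not values taken from this log:

    # Sketch: target-side DH-HMAC-CHAP setup through kernel nvmet configfs.
    # Assumption: $nvmet_host points at /sys/kernel/config/nvmet/hosts/<hostnqn>
    # and keys[]/ckeys[] hold DHHC-1 secrets indexed by keyid.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}

        echo "hmac($digest)" > "$nvmet_host/dhchap_hash"   # e.g. hmac(sha512)
        echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"     # e.g. ffdhe4096
        echo "$key" > "$nvmet_host/dhchap_key"             # host secret
        # The controller key is optional; setting it enables bidirectional auth.
        [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"
    }

The DHHC-1:NN: prefix on each secret records how it was derived (00 for an unhashed secret, 01/02/03 for a secret transformed with SHA-256/384/512); nvme-cli's gen-dhchap-key subcommand emits secrets in this format.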
00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: ]] 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.264 20:20:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.523 nvme0n1 00:33:04.523 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.523 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:04.523 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.523 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.523 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.523 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.523 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.523 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.523 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.523 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: ]] 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.782 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.041 nvme0n1 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.041 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.042 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.301 nvme0n1 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: ]] 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.301 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.302 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.302 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.302 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.302 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.302 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.302 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
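Before every attach, get_main_ns_ip (nvmf/common.sh@741-755 in the trace) resolves which address the initiator should dial: an associative array maps each transport to the name of an environment variable, and that variable is then dereferenced. A sketch of the selection logic, assuming the transport is carried in a variable such as $TEST_TRANSPORT (the trace only shows it already expanded to tcp):

    # Sketch: resolve the target IP for the transport under test.
    # NVMF_INITIATOR_IP / NVMF_FIRST_TARGET_IP are exported elsewhere by the suite.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}  # a variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1           # dereference it; 10.0.0.1 in this run
        echo "${!ip}"
    }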
00:33:05.302 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.302 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.302 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.302 20:20:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.302 20:20:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:05.302 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.302 20:20:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.869 nvme0n1 00:33:05.869 20:20:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.869 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.869 20:20:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.869 20:20:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.869 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.869 20:20:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.869 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.869 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.869 20:20:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
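Each connect_authenticate <digest> <dhgroup> <keyid> pass entered above then exercises the host side over JSON-RPC: restrict the allowed digests and DH groups, attach with the matching keyring entries, check that the controller actually came up, and detach again. The same sequence expressed directly with SPDK's rpc.py, for the sha512/ffdhe6144/keyid-1 case (the key names key1/ckey1 are keyring entries registered earlier in the test, outside this excerpt):

    # Sketch: host-side half of one iteration.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # The attach only succeeds if DH-HMAC-CHAP completed, so the controller
    # listing doubles as the pass/fail check before tearing down.
    [[ $(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0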
00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.870 20:20:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.437 nvme0n1 00:33:06.437 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.437 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.437 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.437 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.437 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.437 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.437 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.437 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.437 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.437 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: ]] 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.694 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.695 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.695 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.695 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.695 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.695 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.695 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:06.695 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.695 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.259 nvme0n1 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: ]] 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:33:07.259 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.260 20:20:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.828 nvme0n1 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.828 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.394 nvme0n1 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.394 20:20:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2M0MmVjOGY4MTc3NzYxNWQ5OGQxOWQ5ZGRiZjYwYWXgLBME: 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: ]] 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzM2NzgyZTg4ZDU4NWE0ZTI1MWQ3ZTA3OTgwNWRjZDM3ODI3NzI4YTZiNmE3NThmYTI4NjQ5MzZkNGFjOWFhZqkG8Xo=: 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.394 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.395 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.395 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.395 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.395 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.395 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.395 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.395 20:20:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.395 20:20:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:08.395 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.395 20:20:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.332 nvme0n1 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.332 20:20:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.271 nvme0n1 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.271 20:20:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZmNDNkYTEzYzBhYjBkOTY3N2FiMDg3NDk0MWRhMGS5sNkd: 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: ]] 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDg2OThjZmQ4YzY0NTliZjE3YzJhYThmMWZjOTEwODfkDx0v: 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.271 20:20:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.232 nvme0n1 00:33:11.232 20:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.232 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.232 20:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.232 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.232 20:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.232 20:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.232 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.232 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.232 20:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.232 20:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTZkMTU1YjdhMjllMjFjOGUzYmIwOTJjMzc1ZjFlOTAzNzczODE1YjUwNmU4MmNh3eWO/Q==: 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: ]] 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTA1MWJjODA1ODQ0ZWIyNGIwNjcwZWJmMWYyYzQ2Zjbm0xw4: 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:11.490 20:20:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.490 20:20:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.429 nvme0n1 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTg1YjBjODljMTE0N2NmMmRiOGE3ZTgwMTk1ODMzNDQzYzJmM2FhOWUyMzMzYTNiY2I1NDUyNjU0NDI3NGZjOObRG+g=: 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:12.429 20:20:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.365 nvme0n1 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjMzMGIzYmE2ODU2MjU5MjMyNjk1ZjU1NDY1NWE5NGFmMjYwNzA2MmI4MTQ1YjI3RU4PVg==: 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: ]] 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YWIwMTQ3MWJiM2YwOGY0MTU3YTEyMzViNTY5YjY5MzZmNGJlZjBlMjU3M2ZhMzE43lJGLQ==: 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.365 
20:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.365 20:21:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.624 request: 00:33:13.624 { 00:33:13.624 "name": "nvme0", 00:33:13.624 "trtype": "tcp", 00:33:13.624 "traddr": "10.0.0.1", 00:33:13.624 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:13.624 "adrfam": "ipv4", 00:33:13.624 "trsvcid": "4420", 00:33:13.624 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:13.624 "method": "bdev_nvme_attach_controller", 00:33:13.624 "req_id": 1 00:33:13.624 } 00:33:13.624 Got JSON-RPC error response 00:33:13.624 response: 00:33:13.624 { 00:33:13.624 "code": -5, 00:33:13.624 "message": "Input/output error" 00:33:13.624 } 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:13.624 
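The trace above finishes the sha512/ffdhe8192 key-rotation loop and then begins the suite's negative checks: calling bdev_nvme_attach_controller with no --dhchap-key against the auth-required kernel target must fail, so the RPC is wrapped in NOT and the expected JSON-RPC error -5 (Input/output error) appears in the response. A minimal standalone sketch of the same assertion, assuming an SPDK checkout in the working directory and a DH-HMAC-CHAP-protected target already listening on 10.0.0.1:4420 (addresses and NQNs copied from the run above):

    #!/usr/bin/env bash
    # Negative test: an unauthenticated connect to an auth-required subsystem
    # must be rejected; success here is a test failure.
    RPC=./scripts/rpc.py                      # adjust to your SPDK tree
    if "$RPC" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "FAIL: connect without a DHCHAP key unexpectedly succeeded" >&2
        exit 1
    fi
    echo "PASS: unauthenticated connect was rejected (JSON-RPC error -5)"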
20:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.624 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.625 request: 00:33:13.625 { 00:33:13.625 "name": "nvme0", 00:33:13.625 "trtype": "tcp", 00:33:13.625 "traddr": "10.0.0.1", 00:33:13.625 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:13.625 "adrfam": "ipv4", 00:33:13.625 "trsvcid": "4420", 00:33:13.625 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:13.625 "dhchap_key": "key2", 00:33:13.625 "method": "bdev_nvme_attach_controller", 00:33:13.625 "req_id": 1 00:33:13.625 } 00:33:13.625 Got JSON-RPC error response 00:33:13.625 response: 00:33:13.625 { 00:33:13.625 "code": -5, 00:33:13.625 "message": "Input/output error" 00:33:13.625 } 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:13.625 
20:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.625 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.883 request: 00:33:13.883 { 00:33:13.883 "name": "nvme0", 00:33:13.883 "trtype": "tcp", 00:33:13.883 "traddr": "10.0.0.1", 00:33:13.883 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:13.883 "adrfam": "ipv4", 00:33:13.883 "trsvcid": "4420", 00:33:13.883 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:13.883 "dhchap_key": "key1", 00:33:13.883 "dhchap_ctrlr_key": "ckey2", 00:33:13.883 "method": "bdev_nvme_attach_controller", 00:33:13.883 "req_id": 1 
00:33:13.883 } 00:33:13.884 Got JSON-RPC error response 00:33:13.884 response: 00:33:13.884 { 00:33:13.884 "code": -5, 00:33:13.884 "message": "Input/output error" 00:33:13.884 } 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:13.884 rmmod nvme_tcp 00:33:13.884 rmmod nvme_fabrics 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3333342 ']' 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3333342 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 3333342 ']' 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 3333342 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3333342 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3333342' 00:33:13.884 killing process with pid 3333342 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 3333342 00:33:13.884 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 3333342 00:33:14.144 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:14.144 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:14.144 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:14.144 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:14.144 20:21:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:14.144 20:21:01 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.144 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:14.144 20:21:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.054 20:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:16.054 20:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:16.054 20:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:16.054 20:21:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:16.054 20:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:16.054 20:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:16.054 20:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:16.054 20:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:16.054 20:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:16.054 20:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:16.054 20:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:16.054 20:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:16.054 20:21:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:17.434 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:17.434 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:17.434 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:17.434 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:17.434 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:17.434 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:17.434 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:17.434 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:17.434 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:17.434 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:17.434 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:17.434 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:17.434 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:17.434 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:17.434 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:17.434 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:18.366 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:18.366 20:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.lhq /tmp/spdk.key-null.YbV /tmp/spdk.key-sha256.t1n /tmp/spdk.key-sha384.r8W /tmp/spdk.key-sha512.DfD /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:18.366 20:21:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:19.741 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:19.741 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:19.741 0000:00:04.6 (8086 0e26): Already using the 
vfio-pci driver 00:33:19.741 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:19.741 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:19.741 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:19.741 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:19.741 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:19.741 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:19.741 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:19.741 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:19.741 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:19.741 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:19.741 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:19.741 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:19.741 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:19.741 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:19.741 00:33:19.741 real 0m49.217s 00:33:19.741 user 0m47.032s 00:33:19.741 sys 0m5.644s 00:33:19.741 20:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:19.741 20:21:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.741 ************************************ 00:33:19.741 END TEST nvmf_auth_host 00:33:19.741 ************************************ 00:33:19.741 20:21:07 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:19.741 20:21:07 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:19.741 20:21:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:19.741 20:21:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:19.741 20:21:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:19.741 ************************************ 00:33:19.741 START TEST nvmf_digest 00:33:19.741 ************************************ 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:19.741 * Looking for test storage... 
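The teardown that closed nvmf_auth_host above unwinds the kernel nvmet target in the reverse order it was built: unlink the allowed host, disable the namespace, unlink the subsystem from the port, then remove the namespace, port, and subsystem directories before unloading the modules. A sketch of that configfs sequence, assuming the single-namespace layout this suite creates; note the bare "echo 0" in the trace does not capture its redirect target, so the conventional "enable" attribute is assumed here:

    # configfs teardown mirroring host/auth.sh cleanup + clean_kernel_target
    cfg=/sys/kernel/config/nvmet
    subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0
    rm -f  $subsys/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir  $cfg/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > $subsys/namespaces/1/enable      # assumed target of the bare 'echo 0'
    rm -f  $cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir  $subsys/namespaces/1
    rmdir  $cfg/ports/1
    rmdir  $subsys
    modprobe -r nvmet_tcp nvmet               # only succeeds once holders are gone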
00:33:19.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:19.741 20:21:07 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:19.741 20:21:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:21.645 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:21.646 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:21.646 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:21.646 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:21.646 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:21.646 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:21.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:21.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:33:21.905 00:33:21.905 --- 10.0.0.2 ping statistics --- 00:33:21.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.905 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:21.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:21.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:33:21.905 00:33:21.905 --- 10.0.0.1 ping statistics --- 00:33:21.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.905 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:21.905 ************************************ 00:33:21.905 START TEST nvmf_digest_clean 00:33:21.905 ************************************ 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3343315 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3343315 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3343315 ']' 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.905 
20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:21.905 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:21.905 [2024-07-13 20:21:09.443873] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:21.905 [2024-07-13 20:21:09.443962] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:21.905 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.905 [2024-07-13 20:21:09.508771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.163 [2024-07-13 20:21:09.592474] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:22.163 [2024-07-13 20:21:09.592556] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:22.163 [2024-07-13 20:21:09.592572] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:22.163 [2024-07-13 20:21:09.592583] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:22.163 [2024-07-13 20:21:09.592593] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
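Here nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, which parks the app before subsystem initialization so the digest test can set pre-init options first; waitforlisten then just polls the RPC socket until it answers. A rough standalone equivalent, assuming an SPDK build tree and the namespace created earlier (the polling loop stands in for the harness's waitforlisten helper):

    # start the target paused, then wait for its RPC socket to answer
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # ...issue pre-init configuration RPCs here, then leave the pause:
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init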
00:33:22.163 [2024-07-13 20:21:09.592633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.163 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:22.163 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:22.163 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:22.163 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:22.163 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:22.163 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:22.163 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:22.163 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:22.163 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:22.163 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:22.163 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:22.163 null0 00:33:22.163 [2024-07-13 20:21:09.778392] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:22.163 [2024-07-13 20:21:09.802599] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:22.163 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:22.163 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:22.163 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:22.163 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:22.164 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:22.164 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:22.164 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:22.164 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:22.164 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3343337 00:33:22.164 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3343337 /var/tmp/bperf.sock 00:33:22.164 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3343337 ']' 00:33:22.164 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:22.164 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:22.164 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:22.164 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:22.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:22.164 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:22.164 20:21:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:22.422 [2024-07-13 20:21:09.851617] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:22.422 [2024-07-13 20:21:09.851692] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343337 ] 00:33:22.422 EAL: No free 2048 kB hugepages reported on node 1 00:33:22.422 [2024-07-13 20:21:09.911567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.422 [2024-07-13 20:21:09.997373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.422 20:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:22.422 20:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:22.422 20:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:22.422 20:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:22.422 20:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:22.991 20:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:22.991 20:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:23.251 nvme0n1 00:33:23.251 20:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:23.251 20:21:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:23.251 Running I/O for 2 seconds... 
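The measurement itself runs in bdevperf: it is started paused on its own RPC socket, framework_start_init releases it, the NVMe/TCP controller is attached with --ddgst so crc32c data digests are generated and checked on every I/O, and perform_tests kicks off the 2-second randread pass whose numbers follow. Condensed from the commands in the trace above, assuming the same build layout:

    sock=/var/tmp/bperf.sock
    ./build/examples/bdevperf -m 2 -r $sock -w randread -o 4096 -t 2 -q 128 \
        -z --wait-for-rpc &                   # -z: hold I/O until perform_tests
    until ./scripts/rpc.py -s $sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2                             # stand-in for waitforlisten
    done
    ./scripts/rpc.py -s $sock framework_start_init
    ./scripts/rpc.py -s $sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    ./examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests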
00:33:25.784
00:33:25.784 Latency(us)
00:33:25.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:25.784 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:25.784 nvme0n1 : 2.01 18872.26 73.72 0.00 0.00 6773.67 2827.76 18350.08
00:33:25.784 ===================================================================================================================
00:33:25.784 Total : 18872.26 73.72 0.00 0.00 6773.67 2827.76 18350.08
00:33:25.784 0
00:33:25.784 20:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:25.784 20:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:25.784 20:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:25.784 20:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:25.784 | select(.opcode=="crc32c")
00:33:25.784 | "\(.module_name) \(.executed)"'
00:33:25.784 20:21:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3343337
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3343337 ']'
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3343337
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3343337
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3343337'
killing process with pid 3343337
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3343337
Received shutdown signal, test time was about 2.000000 seconds
00:33:25.784
00:33:25.784 Latency(us)
00:33:25.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:25.784 ===================================================================================================================
00:33:25.784 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3343337
00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:33:25.784 20:21:13
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3343747 00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3343747 /var/tmp/bperf.sock 00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3343747 ']' 00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:25.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:25.784 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:25.784 [2024-07-13 20:21:13.418925] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:25.784 [2024-07-13 20:21:13.419020] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3343747 ] 00:33:25.784 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:25.784 Zero copy mechanism will not be used. 
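The "Zero copy mechanism will not be used" notice just above is expected for this pass: the message states that bdevperf compares the I/O size against a 65536-byte zero-copy threshold, so the 131072-byte runs exceed it while the 4096-byte runs stay under it. A trivial restatement of that guard (illustrative only; the real check lives inside the sock layer, not in the test script):

  io_size=131072 threshold=65536
  if [ "$io_size" -gt "$threshold" ]; then
      echo "I/O size of $io_size is greater than zero copy threshold ($threshold)."
  fi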
00:33:26.042 EAL: No free 2048 kB hugepages reported on node 1 00:33:26.042 [2024-07-13 20:21:13.478396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.042 [2024-07-13 20:21:13.563366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.042 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:26.042 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:26.042 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:26.042 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:26.042 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:26.300 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:26.300 20:21:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:26.874 nvme0n1 00:33:26.874 20:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:26.874 20:21:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:26.874 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:26.874 Zero copy mechanism will not be used. 00:33:26.874 Running I/O for 2 seconds... 
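This is the second of four clean-path passes; digest.sh@128-131 repeat the same helper and only vary the workload, block size and queue depth. Expressed as the run_bperf arguments seen in the trace (an illustrative loop; the script spells out the four calls individually, and the final false is scan_dsa):

  # rw, block size, queue depth, scan_dsa
  for cfg in 'randread 4096 128' 'randread 131072 16' 'randwrite 4096 128' 'randwrite 131072 16'; do
      run_bperf $cfg false   # unquoted on purpose: each entry word-splits into three args
  done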
00:33:28.810
00:33:28.810 Latency(us)
00:33:28.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:28.810 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:28.810 nvme0n1 : 2.00 2710.83 338.85 0.00 0.00 5897.58 5194.33 14369.37
00:33:28.810 ===================================================================================================================
00:33:28.810 Total : 2710.83 338.85 0.00 0.00 5897.58 5194.33 14369.37
00:33:28.810 0
00:33:28.810 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:28.810 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:28.810 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:28.810 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:28.810 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:28.810 | select(.opcode=="crc32c")
00:33:28.811 | "\(.module_name) \(.executed)"'
00:33:29.070 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:29.070 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:29.070 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:29.070 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:29.070 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3343747
00:33:29.070 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3343747 ']'
00:33:29.070 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3343747
00:33:29.070 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:33:29.070 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:29.070 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3343747
00:33:29.070 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:29.070 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:29.070 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3343747'
killing process with pid 3343747
00:33:29.070 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3343747
Received shutdown signal, test time was about 2.000000 seconds
00:33:29.070
00:33:29.070 Latency(us)
00:33:29.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:29.070 ===================================================================================================================
00:33:29.070 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:29.070 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3343747
00:33:29.328 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:33:29.328 20:21:16
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:29.328 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:29.328 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:29.328 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:29.328 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:29.328 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:29.328 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3344155 00:33:29.328 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:29.328 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3344155 /var/tmp/bperf.sock 00:33:29.328 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3344155 ']' 00:33:29.328 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:29.328 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:29.328 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:29.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:29.328 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:29.328 20:21:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:29.586 [2024-07-13 20:21:16.989717] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:33:29.586 [2024-07-13 20:21:16.989810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344155 ] 00:33:29.586 EAL: No free 2048 kB hugepages reported on node 1 00:33:29.586 [2024-07-13 20:21:17.052497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.586 [2024-07-13 20:21:17.140348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.586 20:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:29.586 20:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:29.586 20:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:29.586 20:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:29.586 20:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:30.154 20:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:30.154 20:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:30.414 nvme0n1 00:33:30.414 20:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:30.414 20:21:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:30.414 Running I/O for 2 seconds... 
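When each 2-second run completes, the get_accel_stats step seen in these traces pulls the accel framework counters back from bdevperf and checks that the crc32c digests were computed by the expected module (software here, since every pass runs with scan_dsa=false). The jq filter is taken verbatim from the trace; the surrounding plumbing is a condensed sketch of digest.sh@93-96:

  # confirm crc32c work was actually executed, and by the expected accel module
  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
    | { read -r acc_module acc_executed;
        (( acc_executed > 0 )) && [[ $acc_module == software ]]; }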
00:33:32.949
00:33:32.949 Latency(us)
00:33:32.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:32.949 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:32.949 nvme0n1 : 2.01 18898.25 73.82 0.00 0.00 6757.53 5437.06 16505.36
00:33:32.949 ===================================================================================================================
00:33:32.949 Total : 18898.25 73.82 0.00 0.00 6757.53 5437.06 16505.36
00:33:32.949 0
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:32.949 | select(.opcode=="crc32c")
00:33:32.949 | "\(.module_name) \(.executed)"'
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3344155
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3344155 ']'
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3344155
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3344155
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3344155'
killing process with pid 3344155
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3344155
Received shutdown signal, test time was about 2.000000 seconds
00:33:32.949
00:33:32.949 Latency(us)
00:33:32.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:32.949 ===================================================================================================================
00:33:32.949 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3344155
00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:33:32.949 20:21:20
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:32.949 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:32.950 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:32.950 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3344560 00:33:32.950 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:32.950 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3344560 /var/tmp/bperf.sock 00:33:32.950 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 3344560 ']' 00:33:32.950 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:32.950 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:32.950 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:32.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:32.950 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:32.950 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:32.950 [2024-07-13 20:21:20.540831] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:32.950 [2024-07-13 20:21:20.540950] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3344560 ] 00:33:32.950 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:32.950 Zero copy mechanism will not be used. 
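The teardown that repeats between passes is autotest_common.sh's killprocess, whose xtrace appears throughout this section. Condensed, and under the assumption that the helper does little more than what the trace shows, it amounts to:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                        # still running?
      process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1
      [ "$process_name" = sudo ] && return 1            # never signal a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                       # reap; bdevperf is a child of this shell
  }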
00:33:32.950 EAL: No free 2048 kB hugepages reported on node 1 00:33:33.208 [2024-07-13 20:21:20.606535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.208 [2024-07-13 20:21:20.701044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.208 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:33.208 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:33.208 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:33.208 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:33.208 20:21:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:33.777 20:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.777 20:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:34.035 nvme0n1 00:33:34.035 20:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:34.035 20:21:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:34.295 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:34.295 Zero copy mechanism will not be used. 00:33:34.295 Running I/O for 2 seconds... 
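The waitforlisten step used after every spawn (e.g. "waitforlisten 3344560 /var/tmp/bperf.sock" above) blocks until the new process answers on its RPC socket. Roughly, and hypothetically condensed from its observable behaviour rather than copied from autotest_common.sh:

  waitforlisten() {
      local pid=$1 addr=${2:-/var/tmp/spdk.sock} i=0
      while (( i++ < 100 )); do
          kill -0 "$pid" 2>/dev/null || return 1                        # bail out if the process died
          ./scripts/rpc.py -t 1 -s "$addr" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }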
00:33:36.199
00:33:36.199 Latency(us)
00:33:36.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:36.199 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:36.199 nvme0n1 : 2.01 1906.11 238.26 0.00 0.00 8373.35 3398.16 11505.21
00:33:36.199 ===================================================================================================================
00:33:36.199 Total : 1906.11 238.26 0.00 0.00 8373.35 3398.16 11505.21
00:33:36.199 0
00:33:36.199 20:21:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:36.199 20:21:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:36.199 20:21:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:36.199 20:21:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:36.199 20:21:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:36.199 | select(.opcode=="crc32c")
00:33:36.199 | "\(.module_name) \(.executed)"'
00:33:36.457 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:36.457 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:36.457 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:36.457 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:36.457 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3344560
00:33:36.457 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3344560 ']'
00:33:36.457 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3344560
00:33:36.457 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname
00:33:36.457 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:36.457 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3344560
00:33:36.457 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:36.457 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:36.457 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3344560'
killing process with pid 3344560
00:33:36.457 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3344560
Received shutdown signal, test time was about 2.000000 seconds
00:33:36.457
00:33:36.457 Latency(us)
00:33:36.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:36.457 ===================================================================================================================
00:33:36.457 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:36.457 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3344560
00:33:36.715 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3343315
00:33:36.715 20:21:24
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 3343315 ']' 00:33:36.715 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 3343315 00:33:36.715 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:36.715 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:36.715 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3343315 00:33:36.715 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:36.715 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:36.715 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3343315' 00:33:36.715 killing process with pid 3343315 00:33:36.715 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 3343315 00:33:36.715 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 3343315 00:33:36.973 00:33:36.973 real 0m15.137s 00:33:36.973 user 0m30.541s 00:33:36.973 sys 0m3.814s 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:36.973 ************************************ 00:33:36.973 END TEST nvmf_digest_clean 00:33:36.973 ************************************ 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:36.973 ************************************ 00:33:36.973 START TEST nvmf_digest_error 00:33:36.973 ************************************ 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3345115 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3345115 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3345115 ']' 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:36.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:36.973 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:37.231 [2024-07-13 20:21:24.636167] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:37.231 [2024-07-13 20:21:24.636265] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:37.231 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.231 [2024-07-13 20:21:24.699726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.231 [2024-07-13 20:21:24.782220] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:37.231 [2024-07-13 20:21:24.782274] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:37.231 [2024-07-13 20:21:24.782297] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:37.231 [2024-07-13 20:21:24.782307] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:37.231 [2024-07-13 20:21:24.782317] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
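The digest_error test that begins here differs from the clean passes in one key step: the target is brought up with --wait-for-rpc precisely so that, before framework init, crc32c can be re-routed into the error-injecting accel module. The trace that follows shows the full sequence; in plain form (commands as they appear below, with rpc_cmd addressing the target and rpc.py -s /var/tmp/bperf.sock addressing the initiator):

  rpc_cmd accel_assign_opc -o crc32c -m error              # route crc32c through the error module
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable    # start in pass-through mode
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 digest ops

The storm of "data digest error" and "COMMAND TRANSIENT TRANSPORT ERROR" records in the remainder of the trace is the intended outcome of that last call.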
00:33:37.231 [2024-07-13 20:21:24.782343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:37.232 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:37.232 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:37.232 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:37.232 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:37.232 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:37.232 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:37.232 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:37.232 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.232 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:37.232 [2024-07-13 20:21:24.866924] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:37.232 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.232 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:37.232 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:37.232 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:37.232 20:21:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:37.490 null0 00:33:37.490 [2024-07-13 20:21:24.977336] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:37.490 [2024-07-13 20:21:25.001549] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:37.490 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:37.490 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:37.490 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:37.490 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:37.490 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:37.490 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:37.490 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3345135 00:33:37.490 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:37.490 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3345135 /var/tmp/bperf.sock 00:33:37.490 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3345135 ']' 00:33:37.490 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:37.490 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:33:37.490 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:37.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:37.490 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:37.490 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:37.490 [2024-07-13 20:21:25.048074] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:37.490 [2024-07-13 20:21:25.048158] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3345135 ] 00:33:37.490 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.490 [2024-07-13 20:21:25.109159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.748 [2024-07-13 20:21:25.202490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:37.748 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:37.748 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:37.748 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:37.748 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:38.006 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:38.006 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.006 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:38.006 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.006 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.006 20:21:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:38.575 nvme0n1 00:33:38.575 20:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:38.575 20:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.575 20:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:38.575 20:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.575 20:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:38.575 20:21:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:38.575 Running I/O for 2 seconds... 00:33:38.575 [2024-07-13 20:21:26.142766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.575 [2024-07-13 20:21:26.142826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.575 [2024-07-13 20:21:26.142847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.575 [2024-07-13 20:21:26.154512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.575 [2024-07-13 20:21:26.154549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.575 [2024-07-13 20:21:26.154569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.575 [2024-07-13 20:21:26.168415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.575 [2024-07-13 20:21:26.168452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.575 [2024-07-13 20:21:26.168485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.575 [2024-07-13 20:21:26.181576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.575 [2024-07-13 20:21:26.181605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.575 [2024-07-13 20:21:26.181637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.575 [2024-07-13 20:21:26.195983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.575 [2024-07-13 20:21:26.196028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.575 [2024-07-13 20:21:26.196046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.575 [2024-07-13 20:21:26.208528] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.575 [2024-07-13 20:21:26.208562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.575 [2024-07-13 20:21:26.208581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.575 [2024-07-13 20:21:26.220261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.575 [2024-07-13 20:21:26.220288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16884 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.575 [2024-07-13 20:21:26.220319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.834 [2024-07-13 20:21:26.234238] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.834 [2024-07-13 20:21:26.234268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.834 [2024-07-13 20:21:26.234301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.834 [2024-07-13 20:21:26.247003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.834 [2024-07-13 20:21:26.247031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.834 [2024-07-13 20:21:26.247063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.834 [2024-07-13 20:21:26.262234] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.834 [2024-07-13 20:21:26.262264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.834 [2024-07-13 20:21:26.262280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.834 [2024-07-13 20:21:26.273736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.834 [2024-07-13 20:21:26.273769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.834 [2024-07-13 20:21:26.273795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.834 [2024-07-13 20:21:26.288100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.834 [2024-07-13 20:21:26.288128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.834 [2024-07-13 20:21:26.288161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.834 [2024-07-13 20:21:26.301929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.834 [2024-07-13 20:21:26.301957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.834 [2024-07-13 20:21:26.301989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.834 [2024-07-13 20:21:26.313949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.834 [2024-07-13 20:21:26.313977] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.834 [2024-07-13 20:21:26.314007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.834 [2024-07-13 20:21:26.329780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.834 [2024-07-13 20:21:26.329815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.834 [2024-07-13 20:21:26.329834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.834 [2024-07-13 20:21:26.343314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.835 [2024-07-13 20:21:26.343349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.835 [2024-07-13 20:21:26.343369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.835 [2024-07-13 20:21:26.356221] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.835 [2024-07-13 20:21:26.356250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.835 [2024-07-13 20:21:26.356283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.835 [2024-07-13 20:21:26.369239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.835 [2024-07-13 20:21:26.369273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.835 [2024-07-13 20:21:26.369293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.835 [2024-07-13 20:21:26.381250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.835 [2024-07-13 20:21:26.381278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.835 [2024-07-13 20:21:26.381310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.835 [2024-07-13 20:21:26.393923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.835 [2024-07-13 20:21:26.393956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:38.835 [2024-07-13 20:21:26.393988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:38.835 [2024-07-13 20:21:26.407173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0) 00:33:38.835 [2024-07-13 20:21:26.407218] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:38.835 [2024-07-13 20:21:26.407235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:38.835 [2024-07-13 20:21:26.420400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0)
00:33:38.835 [2024-07-13 20:21:26.420429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:38.835 [2024-07-13 20:21:26.420461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence repeats for each remaining corrupted read through 00:33:40.654: an nvme_tcp.c:1450 data digest error on tqpair=(0x1a178d0), the READ command print from nvme_qpair.c:243, then COMMAND TRANSIENT TRANSPORT ERROR (00/22) from nvme_qpair.c:474, differing only in timestamp, cid, and lba; the full run is counted below as 151 transient transport errors ...]
00:33:40.654 [2024-07-13 20:21:28.125707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a178d0)
00:33:40.654 [2024-07-13 20:21:28.125734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:40.654 [2024-07-13 20:21:28.125765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
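Each entry above is one deliberately corrupted read: the host-side receive path in nvme_tcp.c computes a CRC-32C over the incoming data PDU, finds it does not match the PDU's DDGST field, and fails the command with generic status (00/22), Command Transient Transport Error, with dnr:0 so the error is retryable; the test counts these completions below. As a hedged sketch of how such a connection is set up, data digest checking is opted into when the controller is attached; the --ddgst option name is recalled from rpc.py, and the address, service, and NQN values are illustrative placeholders, not values from this run:

# Sketch only: attach an NVMe-oF TCP controller with the data digest enabled,
# so every received data PDU carries a CRC-32C the initiator verifies.
# -a/-s/-n values below are placeholders; --ddgst is believed to be the
# rpc.py flag that enables the TCP data digest.
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 --ddgst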
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:40.654 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:40.654 nvme0n1 : 2.00 19274.36 75.29 0.00 0.00 6632.45 3713.71 17961.72
00:33:40.654 ===================================================================================================================
00:33:40.654 Total : 19274.36 75.29 0.00 0.00 6632.45 3713.71 17961.72
00:33:40.654 0
00:33:40.654 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:40.654 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:40.654 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:40.654 | .driver_specific
00:33:40.654 | .nvme_error
00:33:40.654 | .status_code
00:33:40.654 | .command_transient_transport_error'
00:33:40.654 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:40.914 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 151 > 0 ))
00:33:40.914 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3345135
00:33:40.914 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3345135 ']'
00:33:40.914 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3345135
00:33:40.914 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:40.915 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:40.915 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3345135
00:33:40.915 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:40.915 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:40.915 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3345135'
killing process with pid 3345135
00:33:40.915 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3345135
Received shutdown signal, test time was about 2.000000 seconds
00:33:40.915
00:33:40.915 Latency(us)
00:33:40.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:40.915 ===================================================================================================================
00:33:40.915 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:40.915 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3345135
00:33:41.174 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:33:41.174 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:41.174 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:33:41.174 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:41.174 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
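This is the pass/fail check for the subtest that just finished: get_transient_errcount reads the per-status-code NVMe error counters that bdev_get_iostat exposes once --nvme-error-stat is enabled, and the (( 151 > 0 )) test confirms the injected digest failures were counted as transient transport errors. A minimal standalone sketch of that extraction, using the rpc.py path, socket name, and jq filter from this run:

    #!/usr/bin/env bash
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    # Mirrors digest.sh's get_transient_errcount: with --nvme-error-stat set,
    # bdev_get_iostat exposes an nvme_error block with per-status-code counters.
    get_transient_errcount() {
        "$RPC" -s "$SOCK" bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    errs=$(get_transient_errcount nvme0n1)
    # The subtest passes only if the injected digest errors surfaced as
    # COMMAND TRANSIENT TRANSPORT ERROR completions; this run counted 151.
    (( errs > 0 )) && echo "PASS: $errs transient transport errors"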
00:33:41.174 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3345559
00:33:41.174 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:33:41.174 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3345559 /var/tmp/bperf.sock
00:33:41.174 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3345559 ']'
00:33:41.174 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:41.174 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:41.174 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:41.174 [2024-07-13 20:21:28.687445] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
[2024-07-13 20:21:28.687524] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3345559 ]
00:33:41.174 I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
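run_bperf_err then starts a fresh bdevperf (pid 3345559) in idle mode and waitforlisten blocks until its UNIX-domain RPC socket answers. A rough sketch of that launch-and-wait step under the same paths; wait_for_rpc_sock here is a hypothetical stand-in for autotest_common.sh's waitforlisten loop:

    #!/usr/bin/env bash
    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    # -m 2: cpumask selecting core 1; -z: start with no work and wait for
    # RPC-driven jobs; -w randread -o 131072 -q 16 -t 2: 128 KiB random
    # reads at queue depth 16 for 2 seconds, as in the trace above.
    "$BDEVPERF" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    wait_for_rpc_sock() {
        # Poll until the app answers on its RPC socket, bounded by the same
        # max_retries=100 the traced waitforlisten uses.
        local i
        for ((i = 0; i < 100; i++)); do
            "$RPC" -s "$SOCK" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    wait_for_rpc_sock || { kill "$bperfpid"; exit 1; }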
00:33:41.174 EAL: No free 2048 kB hugepages reported on node 1
00:33:41.174 [2024-07-13 20:21:28.750876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:41.432 [2024-07-13 20:21:28.838539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:41.433 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:41.433 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:33:41.433 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:41.433 20:21:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:41.723 20:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:41.723 20:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:41.723 20:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:41.723 20:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:41.723 20:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:41.723 20:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:42.291 nvme0n1
00:33:42.291 20:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:42.291 20:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:42.291 20:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:42.291 20:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:42.291 20:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:42.291 20:21:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:42.291 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:42.291 Zero copy mechanism will not be used.
00:33:42.291 Running I/O for 2 seconds...
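The trace above is the whole error-injection setup for this 131072-byte, qd=16 subtest: enable per-status-code error accounting with unlimited bdev retries, clear any stale crc32c injection, attach the controller over TCP with data digest (--ddgst) enabled, arm crc32c corruption, and start the I/O that produces the digest-error stream below. A condensed sketch of the same RPC sequence; the trace does not expand which socket rpc_cmd targets, so the plain $RPC calls here are an assumption:

    #!/usr/bin/env bash
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    BPERF="$RPC -s /var/tmp/bperf.sock"   # the bdevperf instance started above

    # Count NVMe errors per status code and retry failed I/O indefinitely, so
    # injected digest failures are tallied in iostat instead of failing the job.
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any leftover crc32c injection from the previous subtest.
    $RPC accel_error_inject_error -o crc32c -t disable

    # Attach over TCP with data digest enabled; the receive-path CRC32C check
    # (nvme_tcp_accel_seq_recv_compute_crc32_done) is what the corruption trips.
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-arm crc32c corruption (-t corrupt -i 32, flags as traced) and drive
    # I/O; each tripped digest logs a "data digest error" entry like those below.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests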
00:33:42.291 [2024-07-13 20:21:29.818117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.291 [2024-07-13 20:21:29.818822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.291 [2024-07-13 20:21:29.818846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.291 [2024-07-13 20:21:29.833152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.291 [2024-07-13 20:21:29.833205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.291 [2024-07-13 20:21:29.833224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.291 [2024-07-13 20:21:29.849047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.291 [2024-07-13 20:21:29.849093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.291 [2024-07-13 20:21:29.849110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.291 [2024-07-13 20:21:29.864588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.291 [2024-07-13 20:21:29.864624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.291 [2024-07-13 20:21:29.864643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.291 [2024-07-13 20:21:29.879810] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.291 [2024-07-13 20:21:29.879846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.291 [2024-07-13 20:21:29.879875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.291 [2024-07-13 20:21:29.894446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.291 [2024-07-13 20:21:29.894482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.291 [2024-07-13 20:21:29.894501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.291 [2024-07-13 20:21:29.909239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.291 [2024-07-13 20:21:29.909275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.291 [2024-07-13 20:21:29.909295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.291 [2024-07-13 20:21:29.924512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.291 [2024-07-13 20:21:29.924546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.291 [2024-07-13 20:21:29.924566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.291 [2024-07-13 20:21:29.939826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.291 [2024-07-13 20:21:29.939862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.291 [2024-07-13 20:21:29.939890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.550 [2024-07-13 20:21:29.956050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.550 [2024-07-13 20:21:29.956702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.550 [2024-07-13 20:21:29.956727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.550 [2024-07-13 20:21:29.971769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.550 [2024-07-13 20:21:29.971893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.550 [2024-07-13 20:21:29.971932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.550 [2024-07-13 20:21:29.986356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.550 [2024-07-13 20:21:29.986391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.550 [2024-07-13 20:21:29.986411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.550 [2024-07-13 20:21:30.001038] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.550 [2024-07-13 20:21:30.001069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.550 [2024-07-13 20:21:30.001087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.550 [2024-07-13 20:21:30.016343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.551 [2024-07-13 20:21:30.016408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.551 [2024-07-13 20:21:30.016429] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.551 [2024-07-13 20:21:30.031363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.551 [2024-07-13 20:21:30.031405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.551 [2024-07-13 20:21:30.031425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.551 [2024-07-13 20:21:30.046479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.551 [2024-07-13 20:21:30.046520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.551 [2024-07-13 20:21:30.046551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.551 [2024-07-13 20:21:30.062028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.551 [2024-07-13 20:21:30.062060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.551 [2024-07-13 20:21:30.062079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.551 [2024-07-13 20:21:30.076745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.551 [2024-07-13 20:21:30.076780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.551 [2024-07-13 20:21:30.076801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.551 [2024-07-13 20:21:30.091161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.551 [2024-07-13 20:21:30.091194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.551 [2024-07-13 20:21:30.091227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.551 [2024-07-13 20:21:30.107394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.551 [2024-07-13 20:21:30.107430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.551 [2024-07-13 20:21:30.107449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.551 [2024-07-13 20:21:30.122632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.551 [2024-07-13 20:21:30.122666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:42.551 [2024-07-13 20:21:30.122686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.551 [2024-07-13 20:21:30.137708] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.551 [2024-07-13 20:21:30.137743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.551 [2024-07-13 20:21:30.137764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.551 [2024-07-13 20:21:30.152922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.551 [2024-07-13 20:21:30.152953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.551 [2024-07-13 20:21:30.152987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.551 [2024-07-13 20:21:30.167179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.551 [2024-07-13 20:21:30.167227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.551 [2024-07-13 20:21:30.167247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.551 [2024-07-13 20:21:30.182634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.551 [2024-07-13 20:21:30.182675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.551 [2024-07-13 20:21:30.182695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.551 [2024-07-13 20:21:30.197477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.551 [2024-07-13 20:21:30.197512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.551 [2024-07-13 20:21:30.197531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.811 [2024-07-13 20:21:30.212593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.811 [2024-07-13 20:21:30.212629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.811 [2024-07-13 20:21:30.212648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.811 [2024-07-13 20:21:30.226830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.811 [2024-07-13 20:21:30.226872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.811 [2024-07-13 20:21:30.226893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.811 [2024-07-13 20:21:30.241032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.811 [2024-07-13 20:21:30.241062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.811 [2024-07-13 20:21:30.241095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.811 [2024-07-13 20:21:30.255845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.811 [2024-07-13 20:21:30.255887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.811 [2024-07-13 20:21:30.255921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.811 [2024-07-13 20:21:30.270824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.811 [2024-07-13 20:21:30.270859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.811 [2024-07-13 20:21:30.270886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.811 [2024-07-13 20:21:30.286986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.811 [2024-07-13 20:21:30.287017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.811 [2024-07-13 20:21:30.287050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.811 [2024-07-13 20:21:30.301926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.811 [2024-07-13 20:21:30.301956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.811 [2024-07-13 20:21:30.301995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.811 [2024-07-13 20:21:30.316440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.811 [2024-07-13 20:21:30.316475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.811 [2024-07-13 20:21:30.316494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.811 [2024-07-13 20:21:30.332050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.811 [2024-07-13 20:21:30.332081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.811 [2024-07-13 20:21:30.332099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.811 [2024-07-13 20:21:30.347950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.811 [2024-07-13 20:21:30.347983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.811 [2024-07-13 20:21:30.348000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.811 [2024-07-13 20:21:30.361710] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.812 [2024-07-13 20:21:30.361744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.812 [2024-07-13 20:21:30.361763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.812 [2024-07-13 20:21:30.376148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.812 [2024-07-13 20:21:30.376195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.812 [2024-07-13 20:21:30.376212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.812 [2024-07-13 20:21:30.392104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.812 [2024-07-13 20:21:30.392150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.812 [2024-07-13 20:21:30.392167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:42.812 [2024-07-13 20:21:30.407081] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.812 [2024-07-13 20:21:30.407112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.812 [2024-07-13 20:21:30.407144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:42.812 [2024-07-13 20:21:30.422512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.812 [2024-07-13 20:21:30.422547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.812 [2024-07-13 20:21:30.422566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:42.812 [2024-07-13 20:21:30.438186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 
00:33:42.812 [2024-07-13 20:21:30.438241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.812 [2024-07-13 20:21:30.438261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:42.812 [2024-07-13 20:21:30.454496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:42.812 [2024-07-13 20:21:30.454530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:42.812 [2024-07-13 20:21:30.454550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.470459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.470494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.470514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.486796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.486832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.486851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.503645] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.503681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.503700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.520661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.520696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.520715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.537188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.537237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.537256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.552694] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.552729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.552748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.567245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.567279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.567299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.582682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.582717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.582736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.598459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.598494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.598513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.613073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.613103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.613136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.628391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.628427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.628447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.643458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.643494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.643513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.659096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.659126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.659159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.674447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.674482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.674501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.689692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.689727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.689747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.704690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.704728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.704757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.071 [2024-07-13 20:21:30.719337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.071 [2024-07-13 20:21:30.719374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.071 [2024-07-13 20:21:30.719393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.735068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.735115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.330 [2024-07-13 20:21:30.735133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.750572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.750607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.330 [2024-07-13 20:21:30.750626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.765635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.765670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.330 [2024-07-13 20:21:30.765690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.780680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.780714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.330 [2024-07-13 20:21:30.780733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.796436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.796471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.330 [2024-07-13 20:21:30.796490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.811073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.811118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.330 [2024-07-13 20:21:30.811135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.825596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.825644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.330 [2024-07-13 20:21:30.825664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.839822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.839857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.330 [2024-07-13 20:21:30.839886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.854448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.854484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.330 [2024-07-13 20:21:30.854503] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.870930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.871269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.330 [2024-07-13 20:21:30.871297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.887193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.887223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.330 [2024-07-13 20:21:30.887240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.901099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.901146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.330 [2024-07-13 20:21:30.901164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.915909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.915958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.330 [2024-07-13 20:21:30.915976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.930680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.930711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.330 [2024-07-13 20:21:30.930744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.943549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.943578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.330 [2024-07-13 20:21:30.943610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.330 [2024-07-13 20:21:30.958922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.330 [2024-07-13 20:21:30.958954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:43.331 [2024-07-13 20:21:30.958991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.331 [2024-07-13 20:21:30.972759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.331 [2024-07-13 20:21:30.972788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.331 [2024-07-13 20:21:30.972818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.589 [2024-07-13 20:21:30.987309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:30.987345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:30.987365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.002514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.002549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.002568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.018446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.018482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.018501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.033365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.033400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.033419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.048782] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.048817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.048835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.063089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.063119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.063135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.078390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.078425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.078445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.093689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.093731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.093751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.108008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.108039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.108056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.122820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.122855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.122883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.137506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.137540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.137559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.153816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.153851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.153879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.169016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.169047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.169065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.184207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.184242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.184261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.199602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.199636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.199656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.214375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.214410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.214429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.229744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.229780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.229800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.590 [2024-07-13 20:21:31.244727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.590 [2024-07-13 20:21:31.244761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.590 [2024-07-13 20:21:31.244781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.260139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.260194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.260212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.276016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 
00:33:43.849 [2024-07-13 20:21:31.276048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.276066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.290475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.290510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.290538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.304639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.304674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.304694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.320049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.320095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.320112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.335145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.335190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.335207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.350654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.350688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.350714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.365792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.365827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.365846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.380857] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.380914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.380933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.396161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.396205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.396224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.411640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.411674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.411694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.425548] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.425583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.425602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.440524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.440558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.440577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.456120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.456150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.456182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.470726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.470760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.470779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.485446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.485481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.485500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:43.849 [2024-07-13 20:21:31.499913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:43.849 [2024-07-13 20:21:31.499943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:43.849 [2024-07-13 20:21:31.499959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.515240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.515275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.515294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.530101] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.530131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.530166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.545433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.545468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.545487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.559696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.559730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.559749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.574476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.574509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.574529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.589631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.589665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.589685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.604344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.604379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.604403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.619904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.619935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.619953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.636437] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.636471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.636490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.651561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.651596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.651615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.665833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.665874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.665910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.680282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.680315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.680335] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.696719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.696753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.696773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.712196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.712241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.712257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.726525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.726559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.726578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.741830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.741878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.741900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:44.108 [2024-07-13 20:21:31.758068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.108 [2024-07-13 20:21:31.758113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.108 [2024-07-13 20:21:31.758130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:44.366 [2024-07-13 20:21:31.772398] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.367 [2024-07-13 20:21:31.772433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.367 [2024-07-13 20:21:31.772452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:44.367 [2024-07-13 20:21:31.787705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0) 00:33:44.367 [2024-07-13 20:21:31.787740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:33:44.367 [2024-07-13 20:21:31.787759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:44.367 [2024-07-13 20:21:31.802724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dc62c0)
00:33:44.367 [2024-07-13 20:21:31.802758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.367 [2024-07-13 20:21:31.802778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:44.367
00:33:44.367 Latency(us)
00:33:44.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:44.367 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:44.367 nvme0n1 : 2.01 2047.66 255.96 0.00 0.00 7808.03 6310.87 17767.54
00:33:44.367 ===================================================================================================================
00:33:44.367 Total : 2047.66 255.96 0.00 0.00 7808.03 6310.87 17767.54
00:33:44.367 0
00:33:44.367 20:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:44.367 20:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:44.367 20:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:44.367 | .driver_specific
00:33:44.367 | .nvme_error
00:33:44.367 | .status_code
00:33:44.367 | .command_transient_transport_error'
00:33:44.367 20:21:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:44.627 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 132 > 0 ))
00:33:44.627 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3345559
00:33:44.627 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3345559 ']'
00:33:44.627 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3345559
00:33:44.627 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:44.627 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:44.627 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3345559
00:33:44.627 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:44.627 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:44.627 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3345559'
00:33:44.627 killing process with pid 3345559
00:33:44.627 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3345559
00:33:44.627 Received shutdown signal, test time was about 2.000000 seconds
00:33:44.627
00:33:44.627 Latency(us)
00:33:44.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:44.627 ===================================================================================================================
00:33:44.627 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:44.627 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3345559
00:33:44.885 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:44.885 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:44.885 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:44.885 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:44.885 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:44.885 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3346071
00:33:44.885 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:33:44.885 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3346071 /var/tmp/bperf.sock
00:33:44.885 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3346071 ']'
00:33:44.885 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:44.885 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:44.885 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:44.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:44.885 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:33:44.885 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:44.885 [2024-07-13 20:21:32.372898] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:33:44.885 [2024-07-13 20:21:32.372983] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346071 ]
00:33:44.885 EAL: No free 2048 kB hugepages reported on node 1
00:33:44.885 [2024-07-13 20:21:32.435959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:44.885 [2024-07-13 20:21:32.527351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:45.143 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:45.143 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:33:45.143 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:45.143 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:45.401 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:45.401 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:45.401 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:45.401 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:45.401 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:45.401 20:21:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:45.659 nvme0n1
00:33:45.918 20:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:45.918 20:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:45.918 20:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:45.918 20:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:45.918 20:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:45.918 20:21:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:45.918 Running I/O for 2 seconds...
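The trace above amounts to the following sequence, condensed for readability. Every command is taken verbatim from the digest.sh trace (only the Jenkins workspace prefix is dropped); this is a sketch of what the test driver does, not the digest.sh source itself:

    # Start the initiator-side I/O generator with its own RPC socket (digest.sh@57)
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

    # Count NVMe error completions per status code, and retry forever so
    # injected digest errors never fail the bdev outright (digest.sh@61)
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target over NVMe/TCP with data digest enabled (digest.sh@64)
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Corrupt every 256th crc32c computed by the accel layer (digest.sh@67;
    # the trace issues this through rpc_cmd, i.e. the default RPC socket)
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

    # Run the 2-second workload, then check that digest errors surfaced as
    # TRANSIENT TRANSPORT ERROR completions (digest.sh@69, @71, @27-28)
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))  # the randread run above counted 132 such completions

Each injected crc32c corruption shows up in the log as a data digest error on the TCP qpair followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which is exactly what the final counter check asserts.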
00:33:45.918 [2024-07-13 20:21:33.459014] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:45.918 [2024-07-13 20:21:33.459276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.918 [2024-07-13 20:21:33.459315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:45.918 [2024-07-13 20:21:33.471685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:45.918 [2024-07-13 20:21:33.471942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.918 [2024-07-13 20:21:33.471982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:45.918 [2024-07-13 20:21:33.484127] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:45.918 [2024-07-13 20:21:33.484388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.918 [2024-07-13 20:21:33.484425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:45.918 [2024-07-13 20:21:33.496674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:45.918 [2024-07-13 20:21:33.496929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.918 [2024-07-13 20:21:33.496968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:45.918 [2024-07-13 20:21:33.509306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:45.918 [2024-07-13 20:21:33.509548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.918 [2024-07-13 20:21:33.509585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:45.918 [2024-07-13 20:21:33.521643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:45.918 [2024-07-13 20:21:33.521907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.918 [2024-07-13 20:21:33.521946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:45.918 [2024-07-13 20:21:33.534106] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:45.918 [2024-07-13 20:21:33.534349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.918 [2024-07-13 20:21:33.534387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 
sqhd:007a p:0 m:0 dnr:0 00:33:45.918 [2024-07-13 20:21:33.546434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:45.918 [2024-07-13 20:21:33.546674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.918 [2024-07-13 20:21:33.546711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:45.918 [2024-07-13 20:21:33.558933] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:45.918 [2024-07-13 20:21:33.559187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.918 [2024-07-13 20:21:33.559239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:45.918 [2024-07-13 20:21:33.571483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:45.918 [2024-07-13 20:21:33.571727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.918 [2024-07-13 20:21:33.571763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.178 [2024-07-13 20:21:33.584421] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.178 [2024-07-13 20:21:33.584662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.178 [2024-07-13 20:21:33.584700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.178 [2024-07-13 20:21:33.596882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.178 [2024-07-13 20:21:33.597146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.178 [2024-07-13 20:21:33.597184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.178 [2024-07-13 20:21:33.609295] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.178 [2024-07-13 20:21:33.609537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.178 [2024-07-13 20:21:33.609573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.621623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.621907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.621946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.634027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.634285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.634314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.646520] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.646764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.646802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.659136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.659389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.659419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.671703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.671952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.671990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.684055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.684301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.684337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.696400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.696659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.696695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.708752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.709021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.709059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.721177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.721425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.721463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.733560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.733800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.733843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.745957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.746217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.746254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.758309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.758576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.758614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.770674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.770930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.770968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.784009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.784255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.784296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.796434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.796674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.796711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.808940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.809204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.809233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.821217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.821459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.821495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.179 [2024-07-13 20:21:33.834092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.179 [2024-07-13 20:21:33.834391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.179 [2024-07-13 20:21:33.834428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.439 [2024-07-13 20:21:33.846902] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.439 [2024-07-13 20:21:33.847151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.439 [2024-07-13 20:21:33.847203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.439 [2024-07-13 20:21:33.859424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.439 [2024-07-13 20:21:33.859707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.439 [2024-07-13 20:21:33.859745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.439 [2024-07-13 20:21:33.872020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.439 [2024-07-13 20:21:33.872280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.439 [2024-07-13 20:21:33.872317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:33.884438] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:33.884678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 
20:21:33.884714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:33.896905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:33.897152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 20:21:33.897204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:33.909416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:33.909656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 20:21:33.909692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:33.921745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:33.922011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 20:21:33.922042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:33.934129] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:33.934371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 20:21:33.934407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:33.946492] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:33.946750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 20:21:33.946779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:33.958785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:33.959053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 20:21:33.959091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:33.971298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:33.971537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20590 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:46.440 [2024-07-13 20:21:33.971567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:33.983759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:33.984028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 20:21:33.984065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:33.996192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:33.996429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 20:21:33.996466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:34.008547] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:34.008787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 20:21:34.008822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:34.020817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:34.021086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 20:21:34.021124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:34.033194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:34.033452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 20:21:34.033488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:34.045530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:34.045768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 20:21:34.045804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:34.058355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:34.058613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1099 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 20:21:34.058663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:34.072047] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:34.072327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 20:21:34.072369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.440 [2024-07-13 20:21:34.085683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.440 [2024-07-13 20:21:34.085948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.440 [2024-07-13 20:21:34.085984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.698 [2024-07-13 20:21:34.099734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.699 [2024-07-13 20:21:34.100002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.699 [2024-07-13 20:21:34.100040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.699 [2024-07-13 20:21:34.113308] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.699 [2024-07-13 20:21:34.113570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.699 [2024-07-13 20:21:34.113612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.699 [2024-07-13 20:21:34.127028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.699 [2024-07-13 20:21:34.127297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.699 [2024-07-13 20:21:34.127339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.699 [2024-07-13 20:21:34.140655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.699 [2024-07-13 20:21:34.140926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:46.699 [2024-07-13 20:21:34.140964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:46.699 [2024-07-13 20:21:34.154267] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0 00:33:46.699 [2024-07-13 20:21:34.154522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 
nsid:1 lba:6312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:46.699 [2024-07-13 20:21:34.154563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0
[... further entries in the same three-line pattern elided: every ~13 ms from 20:21:34.167 to 20:21:35.443, tcp.c:2058:data_crc32_calc_done reports a data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0, nvme_qpair.c prints the affected single-block WRITE (qid:1, cids cycling through 4-126), and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:33:47.998 [2024-07-13 20:21:35.443261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8910) with pdu=0x2000190fdeb0
00:33:47.998 [2024-07-13 20:21:35.443519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:47.998 [2024-07-13 20:21:35.443562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:47.998
00:33:47.998 Latency(us)
00:33:47.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:47.998 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:47.998 nvme0n1 : 2.01 19566.59 76.43 0.00 0.00 6526.11 2961.26 14078.10
00:33:47.998 ===================================================================================================================
00:33:47.998 Total : 19566.59 76.43 0.00 0.00 6526.11 2961.26 14078.10
00:33:47.998 0
00:33:47.998 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:47.998 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:47.998 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:47.998 | .driver_specific
00:33:47.998 | .nvme_error
00:33:47.998 | .status_code
00:33:47.998 | .command_transient_transport_error'
00:33:47.998 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:48.256 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 153 > 0 ))
00:33:48.256 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3346071
00:33:48.256 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3346071 ']'
00:33:48.256 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3346071
00:33:48.256 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:33:48.256 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:33:48.256 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3346071
00:33:48.256 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:33:48.256 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:33:48.256 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3346071'
killing process with pid 3346071
20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3346071
Received shutdown signal, test time was about 2.000000 seconds
00:33:48.256
00:33:48.256 Latency(us)
00:33:48.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:48.256 ===================================================================================================================
00:33:48.256 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:48.256 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3346071
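The assertion above, (( 153 > 0 )), is how the digest test decides pass/fail: the injected digest failures must show up in the bdev's NVMe error statistics as COMMAND TRANSIENT TRANSPORT ERROR completions. As a minimal standalone sketch of that readout (socket path, bdev name, and jq filter taken from the trace above; an illustration, not the verbatim digest.sh helper):

    #!/usr/bin/env bash
    # Query the transient transport error counter the way the trace above does.
    # Assumes bdevperf is still serving JSON-RPC on /var/tmp/bperf.sock.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    get_transient_errcount() {
        # bdev_nvme_set_options --nvme-error-stat (issued at setup) makes
        # bdev_get_iostat expose per-status-code NVMe error counters.
        "$rpc" -s "$sock" bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }

    # Pass only if at least one injected digest failure surfaced (153 in this run).
    (( $(get_transient_errcount nvme0n1) > 0 ))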
00:33:48.514 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:33:48.514 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:48.514 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:48.514 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:48.514 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:48.514 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3346478
00:33:48.514 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:33:48.514 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3346478 /var/tmp/bperf.sock
00:33:48.514 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 3346478 ']'
00:33:48.514 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:48.514 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:33:48.514 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:48.514 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:33:48.514 20:21:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:48.514 [2024-07-13 20:21:36.025405] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:33:48.514 [2024-07-13 20:21:36.025485] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346478 ]
00:33:48.514 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:48.514 Zero copy mechanism will not be used.
00:33:48.514 EAL: No free 2048 kB hugepages reported on node 1
00:33:48.514 [2024-07-13 20:21:36.083750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:48.514 [2024-07-13 20:21:36.169402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:33:48.772 20:21:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:33:48.772 20:21:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:33:48.772 20:21:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:48.772 20:21:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:49.031 20:21:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:49.031 20:21:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:49.031 20:21:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:49.031 20:21:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:49.031 20:21:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:49.031 20:21:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:49.601 nvme0n1
00:33:49.601 20:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:49.601 20:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:49.601 20:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:49.601 20:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:49.601 20:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:49.601 20:21:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:49.601 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:49.601 Zero copy mechanism will not be used.
00:33:49.601 Running I/O for 2 seconds...
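The trace above is the whole digest-error setup for this 131072-byte, qd=16 pass. Condensed into one place as a sketch (RPC names and arguments exactly as issued in the trace; note that the accel error injection goes through rpc_cmd to the target app's default RPC socket, while the bdev RPCs go to bperf.sock; flag semantics are not asserted beyond what the trace shows):

    #!/usr/bin/env bash
    # Condensed sketch of the setup sequence traced above; illustration only.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock   # bdevperf's RPC socket (-r above)

    # Record NVMe errors per status code and retry failed commands
    # indefinitely (-1), so injected digest failures are counted in the
    # iostat statistics while the workload still completes.
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Keep crc32c error injection off (on the target, via rpc.py's default
    # socket) while the controller attaches cleanly.
    "$rpc" accel_error_inject_error -o crc32c -t disable

    # Attach over TCP with data digest enabled: --ddgst puts a CRC-32C on
    # the data PDUs, which is what the injected corruption then breaks.
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject 'corrupt' errors into crc32c operations (-t corrupt -i 32, as
    # traced above), then drive I/O; the corrupted digests produce the
    # error entries that follow.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$sock" perform_tests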
00:33:49.601 [2024-07-13 20:21:37.154085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90
00:33:49.601 [2024-07-13 20:21:37.154317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.601 [2024-07-13 20:21:37.154366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... further entries in the same three-line pattern elided: every ~17 ms from 20:21:37.171 to 20:21:37.479, tcp.c:2058:data_crc32_calc_done reports a data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90, nvme_qpair.c prints the affected 32-block WRITE (qid:1 cid:15), and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:33:49.861 [2024-07-13 20:21:37.495528] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90
00:33:49.861 [2024-07-13 20:21:37.496107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.861 [2024-07-13 20:21:37.496148] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.861 [2024-07-13 20:21:37.512497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:49.861 [2024-07-13 20:21:37.512976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.861 [2024-07-13 20:21:37.513003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.120 [2024-07-13 20:21:37.528206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.120 [2024-07-13 20:21:37.528614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.120 [2024-07-13 20:21:37.528659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.120 [2024-07-13 20:21:37.543764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.120 [2024-07-13 20:21:37.544178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.120 [2024-07-13 20:21:37.544205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.120 [2024-07-13 20:21:37.562299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.120 [2024-07-13 20:21:37.562787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.120 [2024-07-13 20:21:37.562814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.120 [2024-07-13 20:21:37.578512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.120 [2024-07-13 20:21:37.578927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.120 [2024-07-13 20:21:37.578955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.120 [2024-07-13 20:21:37.593680] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.120 [2024-07-13 20:21:37.594087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.120 [2024-07-13 20:21:37.594129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.120 [2024-07-13 20:21:37.610374] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.120 [2024-07-13 20:21:37.610797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.121 
[2024-07-13 20:21:37.610837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.121 [2024-07-13 20:21:37.626905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.121 [2024-07-13 20:21:37.627411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.121 [2024-07-13 20:21:37.627453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.121 [2024-07-13 20:21:37.644174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.121 [2024-07-13 20:21:37.644541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.121 [2024-07-13 20:21:37.644568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.121 [2024-07-13 20:21:37.661760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.121 [2024-07-13 20:21:37.662160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.121 [2024-07-13 20:21:37.662206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.121 [2024-07-13 20:21:37.678135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.121 [2024-07-13 20:21:37.678554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.121 [2024-07-13 20:21:37.678582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.121 [2024-07-13 20:21:37.696243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.121 [2024-07-13 20:21:37.696639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.121 [2024-07-13 20:21:37.696665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.121 [2024-07-13 20:21:37.711431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.121 [2024-07-13 20:21:37.711931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.121 [2024-07-13 20:21:37.711973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.121 [2024-07-13 20:21:37.728423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.121 [2024-07-13 20:21:37.728797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.121 [2024-07-13 20:21:37.728839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.121 [2024-07-13 20:21:37.744653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.121 [2024-07-13 20:21:37.745096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.121 [2024-07-13 20:21:37.745123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.121 [2024-07-13 20:21:37.762111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.121 [2024-07-13 20:21:37.762555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.121 [2024-07-13 20:21:37.762582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:37.778506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:37.778979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:37.779022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:37.796043] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:37.796473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:37.796514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:37.813629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:37.814183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:37.814225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:37.830226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:37.830631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:37.830658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:37.846298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:37.846686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:37.846712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:37.863861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:37.864285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:37.864330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:37.879813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:37.880236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:37.880286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:37.897165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:37.897706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:37.897731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:37.914025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:37.914398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:37.914424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:37.931292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:37.931598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:37.931626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:37.948504] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:37.948906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:37.948948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:37.965214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:37.965689] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:37.965715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:37.982394] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:37.982888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:37.982931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:37.998260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:37.998651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:37.998679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:38.015481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:38.015895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:38.015921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.379 [2024-07-13 20:21:38.032000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.379 [2024-07-13 20:21:38.032442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.379 [2024-07-13 20:21:38.032469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.704 [2024-07-13 20:21:38.049313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.704 [2024-07-13 20:21:38.049781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.704 [2024-07-13 20:21:38.049823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.704 [2024-07-13 20:21:38.065996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.704 [2024-07-13 20:21:38.066384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.704 [2024-07-13 20:21:38.066411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.704 [2024-07-13 20:21:38.082515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.704 
[2024-07-13 20:21:38.083032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.704 [2024-07-13 20:21:38.083059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.704 [2024-07-13 20:21:38.100024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.704 [2024-07-13 20:21:38.100441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.704 [2024-07-13 20:21:38.100486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.704 [2024-07-13 20:21:38.116509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.704 [2024-07-13 20:21:38.116978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.704 [2024-07-13 20:21:38.117007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.704 [2024-07-13 20:21:38.134472] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.704 [2024-07-13 20:21:38.134936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.704 [2024-07-13 20:21:38.134964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.704 [2024-07-13 20:21:38.151864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.704 [2024-07-13 20:21:38.152271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.704 [2024-07-13 20:21:38.152298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.704 [2024-07-13 20:21:38.169135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.704 [2024-07-13 20:21:38.169505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.704 [2024-07-13 20:21:38.169545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.704 [2024-07-13 20:21:38.186358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.704 [2024-07-13 20:21:38.186759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.704 [2024-07-13 20:21:38.186788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.704 [2024-07-13 20:21:38.203608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.704 [2024-07-13 20:21:38.203986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.704 [2024-07-13 20:21:38.204014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.704 [2024-07-13 20:21:38.220898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.704 [2024-07-13 20:21:38.221299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.704 [2024-07-13 20:21:38.221341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.704 [2024-07-13 20:21:38.238049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.704 [2024-07-13 20:21:38.238525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.704 [2024-07-13 20:21:38.238551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.704 [2024-07-13 20:21:38.253966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.704 [2024-07-13 20:21:38.254390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.705 [2024-07-13 20:21:38.254417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.705 [2024-07-13 20:21:38.271415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.705 [2024-07-13 20:21:38.271882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.705 [2024-07-13 20:21:38.271927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.705 [2024-07-13 20:21:38.289331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.705 [2024-07-13 20:21:38.289636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.705 [2024-07-13 20:21:38.289663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.705 [2024-07-13 20:21:38.306080] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.705 [2024-07-13 20:21:38.306536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.705 [2024-07-13 20:21:38.306573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.705 [2024-07-13 20:21:38.319131] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.705 [2024-07-13 20:21:38.319555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.705 [2024-07-13 20:21:38.319604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.705 [2024-07-13 20:21:38.333968] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.705 [2024-07-13 20:21:38.334378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.705 [2024-07-13 20:21:38.334424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.965 [2024-07-13 20:21:38.350610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.965 [2024-07-13 20:21:38.351097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.965 [2024-07-13 20:21:38.351139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.965 [2024-07-13 20:21:38.366856] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.965 [2024-07-13 20:21:38.367272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.965 [2024-07-13 20:21:38.367298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.965 [2024-07-13 20:21:38.384522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.965 [2024-07-13 20:21:38.384982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.965 [2024-07-13 20:21:38.385010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.965 [2024-07-13 20:21:38.401732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.965 [2024-07-13 20:21:38.402265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.965 [2024-07-13 20:21:38.402291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.965 [2024-07-13 20:21:38.418024] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.965 [2024-07-13 20:21:38.418359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.965 [2024-07-13 20:21:38.418386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
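The remaining entries below repeat the same signature: each injected CRC32C corruption surfaces in tcp.c as a data digest error on receipt, the host completes the WRITE as COMMAND TRANSIENT TRANSPORT ERROR (00/22), and because the controller was attached with --bdev-retry-count -1 the I/O is retried rather than failed up the stack. The suite's pass criterion is simply that the error counter ends up non-zero; a sketch of the check it performs further down (the jq filter is the one echoed at host/digest.sh line 28, bdev name as attached above):

    # Read the transient-transport-error tally off the bperf bdev's iostat
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
          | .driver_specific
          | .nvme_error
          | .status_code
          | .command_transient_transport_error'
    # host/digest.sh then asserts (( count > 0 )); this run records 119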
00:33:50.965 [2024-07-13 20:21:38.433741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.965 [2024-07-13 20:21:38.434140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.965 [2024-07-13 20:21:38.434183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.965 [2024-07-13 20:21:38.450213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.965 [2024-07-13 20:21:38.450765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.965 [2024-07-13 20:21:38.450794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.965 [2024-07-13 20:21:38.466854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.965 [2024-07-13 20:21:38.467265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.965 [2024-07-13 20:21:38.467308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.965 [2024-07-13 20:21:38.484771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.965 [2024-07-13 20:21:38.485197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.965 [2024-07-13 20:21:38.485223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.965 [2024-07-13 20:21:38.502071] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.965 [2024-07-13 20:21:38.502620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.965 [2024-07-13 20:21:38.502647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.965 [2024-07-13 20:21:38.519356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.965 [2024-07-13 20:21:38.519768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.965 [2024-07-13 20:21:38.519809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.965 [2024-07-13 20:21:38.536941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.965 [2024-07-13 20:21:38.537331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.965 [2024-07-13 20:21:38.537375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.965 [2024-07-13 20:21:38.553669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.965 [2024-07-13 20:21:38.554076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.965 [2024-07-13 20:21:38.554119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.965 [2024-07-13 20:21:38.571424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.965 [2024-07-13 20:21:38.571888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.965 [2024-07-13 20:21:38.571934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.965 [2024-07-13 20:21:38.588644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.965 [2024-07-13 20:21:38.589079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.965 [2024-07-13 20:21:38.589106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.966 [2024-07-13 20:21:38.606517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:50.966 [2024-07-13 20:21:38.606935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.966 [2024-07-13 20:21:38.606980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.224 [2024-07-13 20:21:38.624388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.224 [2024-07-13 20:21:38.624813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.224 [2024-07-13 20:21:38.624858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.224 [2024-07-13 20:21:38.641677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.224 [2024-07-13 20:21:38.642053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.224 [2024-07-13 20:21:38.642096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.224 [2024-07-13 20:21:38.658985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.224 [2024-07-13 20:21:38.659391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.224 [2024-07-13 20:21:38.659437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.224 [2024-07-13 20:21:38.676411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.225 [2024-07-13 20:21:38.676800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.225 [2024-07-13 20:21:38.676841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.225 [2024-07-13 20:21:38.693581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.225 [2024-07-13 20:21:38.694016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.225 [2024-07-13 20:21:38.694043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.225 [2024-07-13 20:21:38.711478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.225 [2024-07-13 20:21:38.711895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.225 [2024-07-13 20:21:38.711923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.225 [2024-07-13 20:21:38.728971] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.225 [2024-07-13 20:21:38.729367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.225 [2024-07-13 20:21:38.729410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.225 [2024-07-13 20:21:38.746111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.225 [2024-07-13 20:21:38.746362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.225 [2024-07-13 20:21:38.746389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.225 [2024-07-13 20:21:38.763429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.225 [2024-07-13 20:21:38.763732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.225 [2024-07-13 20:21:38.763759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.225 [2024-07-13 20:21:38.780581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.225 [2024-07-13 20:21:38.781142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.225 [2024-07-13 20:21:38.781185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.225 [2024-07-13 20:21:38.795146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.225 [2024-07-13 20:21:38.795569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.225 [2024-07-13 20:21:38.795610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.225 [2024-07-13 20:21:38.811412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.225 [2024-07-13 20:21:38.811802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.225 [2024-07-13 20:21:38.811843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.225 [2024-07-13 20:21:38.829236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.225 [2024-07-13 20:21:38.829744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.225 [2024-07-13 20:21:38.829770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.225 [2024-07-13 20:21:38.846439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.225 [2024-07-13 20:21:38.846884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.225 [2024-07-13 20:21:38.846934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.225 [2024-07-13 20:21:38.863777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.225 [2024-07-13 20:21:38.864202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.225 [2024-07-13 20:21:38.864256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.225 [2024-07-13 20:21:38.880189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.225 [2024-07-13 20:21:38.880581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.225 [2024-07-13 20:21:38.880626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.485 [2024-07-13 20:21:38.896775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.485 [2024-07-13 20:21:38.897312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.485 
[2024-07-13 20:21:38.897358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.485 [2024-07-13 20:21:38.913985] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.485 [2024-07-13 20:21:38.914404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.485 [2024-07-13 20:21:38.914451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.485 [2024-07-13 20:21:38.930753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.485 [2024-07-13 20:21:38.931057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.485 [2024-07-13 20:21:38.931086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.485 [2024-07-13 20:21:38.947262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.485 [2024-07-13 20:21:38.947639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.485 [2024-07-13 20:21:38.947667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.485 [2024-07-13 20:21:38.964507] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.485 [2024-07-13 20:21:38.964921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.485 [2024-07-13 20:21:38.964950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.485 [2024-07-13 20:21:38.980987] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.485 [2024-07-13 20:21:38.981446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.485 [2024-07-13 20:21:38.981472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.485 [2024-07-13 20:21:38.997388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.485 [2024-07-13 20:21:38.997880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.485 [2024-07-13 20:21:38.997907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.485 [2024-07-13 20:21:39.014187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.485 [2024-07-13 20:21:39.014655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.485 [2024-07-13 20:21:39.014681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.485 [2024-07-13 20:21:39.030032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.485 [2024-07-13 20:21:39.030501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.485 [2024-07-13 20:21:39.030527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.485 [2024-07-13 20:21:39.047115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.485 [2024-07-13 20:21:39.047501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.485 [2024-07-13 20:21:39.047548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.485 [2024-07-13 20:21:39.063057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.485 [2024-07-13 20:21:39.063547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.485 [2024-07-13 20:21:39.063579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.485 [2024-07-13 20:21:39.079939] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.485 [2024-07-13 20:21:39.080324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.485 [2024-07-13 20:21:39.080351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.485 [2024-07-13 20:21:39.095911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.485 [2024-07-13 20:21:39.096360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.485 [2024-07-13 20:21:39.096406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:51.485 [2024-07-13 20:21:39.113416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.485 [2024-07-13 20:21:39.113863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.485 [2024-07-13 20:21:39.113896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:51.485 [2024-07-13 20:21:39.129911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.485 [2024-07-13 20:21:39.130329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.485 [2024-07-13 20:21:39.130357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:51.743 [2024-07-13 20:21:39.145812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1bb8c50) with pdu=0x2000190fef90 00:33:51.743 [2024-07-13 20:21:39.146291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.743 [2024-07-13 20:21:39.146319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:51.743 00:33:51.743 Latency(us) 00:33:51.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.743 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:51.743 nvme0n1 : 2.01 1839.38 229.92 0.00 0.00 8674.15 4490.43 19223.89 00:33:51.743 =================================================================================================================== 00:33:51.743 Total : 1839.38 229.92 0.00 0.00 8674.15 4490.43 19223.89 00:33:51.743 0 00:33:51.743 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:51.743 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:51.743 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:51.743 | .driver_specific 00:33:51.743 | .nvme_error 00:33:51.743 | .status_code 00:33:51.743 | .command_transient_transport_error' 00:33:51.743 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:52.002 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 119 > 0 )) 00:33:52.002 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3346478 00:33:52.002 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3346478 ']' 00:33:52.002 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3346478 00:33:52.002 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:52.002 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:52.002 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3346478 00:33:52.002 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:52.002 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:52.002 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3346478' 00:33:52.002 killing process with pid 3346478 00:33:52.002 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3346478 00:33:52.002 Received shutdown signal, test time was about 2.000000 seconds 00:33:52.002 00:33:52.002 Latency(us) 00:33:52.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:52.002 
=================================================================================================================== 00:33:52.002 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:52.002 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3346478 00:33:52.262 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3345115 00:33:52.262 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 3345115 ']' 00:33:52.262 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 3345115 00:33:52.262 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:52.262 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:52.262 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3345115 00:33:52.262 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:52.262 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:52.262 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3345115' 00:33:52.262 killing process with pid 3345115 00:33:52.262 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 3345115 00:33:52.262 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 3345115 00:33:52.521 00:33:52.521 real 0m15.341s 00:33:52.521 user 0m29.542s 00:33:52.521 sys 0m3.934s 00:33:52.521 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:52.521 20:21:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:52.521 ************************************ 00:33:52.521 END TEST nvmf_digest_error 00:33:52.521 ************************************ 00:33:52.521 20:21:39 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:52.521 20:21:39 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:52.521 20:21:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:52.521 20:21:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:52.521 20:21:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:52.521 20:21:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:52.521 20:21:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:52.521 20:21:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:52.521 rmmod nvme_tcp 00:33:52.521 rmmod nvme_fabrics 00:33:52.521 rmmod nvme_keyring 00:33:52.521 20:21:39 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:52.521 20:21:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:52.521 20:21:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:52.521 20:21:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3345115 ']' 00:33:52.521 20:21:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3345115 00:33:52.521 20:21:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 3345115 ']' 00:33:52.521 20:21:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 3345115 00:33:52.521 
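The get_transient_errcount step traced above is the crux of the digest-error test: each deliberately corrupted data digest surfaces as a COMMAND TRANSIENT TRANSPORT ERROR completion, and the tally is read back over bdevperf's RPC socket. A minimal standalone sketch of that query, assuming the /var/tmp/bperf.sock socket shown in the trace is still live:

# Count transient transport errors recorded against nvme0n1, using the
# same bdev_get_iostat RPC and jq path the script trace shows above.
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error'

The test then only asserts the count is positive, (( 119 > 0 )) in this run, before killing the bperf process.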
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3345115) - No such process 00:33:52.521 20:21:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 3345115 is not found' 00:33:52.521 Process with pid 3345115 is not found 00:33:52.521 20:21:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:52.521 20:21:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:52.521 20:21:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:52.521 20:21:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:52.521 20:21:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:52.521 20:21:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.521 20:21:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:52.521 20:21:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.428 20:21:42 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:54.428 00:33:54.428 real 0m34.791s 00:33:54.428 user 1m0.858s 00:33:54.428 sys 0m9.273s 00:33:54.428 20:21:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:54.428 20:21:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:54.428 ************************************ 00:33:54.428 END TEST nvmf_digest 00:33:54.428 ************************************ 00:33:54.428 20:21:42 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:54.428 20:21:42 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:54.428 20:21:42 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:54.428 20:21:42 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:54.428 20:21:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:54.428 20:21:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:54.428 20:21:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:54.686 ************************************ 00:33:54.686 START TEST nvmf_bdevperf 00:33:54.686 ************************************ 00:33:54.686 20:21:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:54.687 * Looking for test storage... 
00:33:54.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:54.687 20:21:42 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:56.589 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:56.589 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:56.589 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:56.590 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:56.590 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:56.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:56.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:33:56.590 00:33:56.590 --- 10.0.0.2 ping statistics --- 00:33:56.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:56.590 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:56.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:56.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:33:56.590 00:33:56.590 --- 10.0.0.1 ping statistics --- 00:33:56.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:56.590 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3348828 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3348828 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3348828 ']' 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:56.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:56.590 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.590 [2024-07-13 20:21:44.220950] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:56.590 [2024-07-13 20:21:44.221022] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:56.848 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.848 [2024-07-13 20:21:44.289968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:56.848 [2024-07-13 20:21:44.374656] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
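Worth pausing on the nvmftestinit plumbing traced above: the two detected e810 ports become a point-to-point TCP rig by moving the target-side port into a network namespace, which is why nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk. A condensed sketch of that setup, assuming the interfaces are already named cvl_0_0 (target) and cvl_0_1 (initiator) as detected earlier:

# Namespace topology built by nvmf_tcp_init in the trace above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # initiator to in-namespace target, as verified above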
00:33:56.848 [2024-07-13 20:21:44.374710] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:56.848 [2024-07-13 20:21:44.374738] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:56.848 [2024-07-13 20:21:44.374749] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:56.848 [2024-07-13 20:21:44.374758] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:56.848 [2024-07-13 20:21:44.374825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:56.848 [2024-07-13 20:21:44.374910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:56.848 [2024-07-13 20:21:44.374914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:56.848 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:56.848 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:33:56.848 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:56.848 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:56.848 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.848 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:56.848 20:21:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:56.848 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:56.848 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:56.848 [2024-07-13 20:21:44.500309] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:57.106 Malloc0 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:57.106 [2024-07-13 20:21:44.560392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:57.106 { 00:33:57.106 "params": { 00:33:57.106 "name": "Nvme$subsystem", 00:33:57.106 "trtype": "$TEST_TRANSPORT", 00:33:57.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:57.106 "adrfam": "ipv4", 00:33:57.106 "trsvcid": "$NVMF_PORT", 00:33:57.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:57.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:57.106 "hdgst": ${hdgst:-false}, 00:33:57.106 "ddgst": ${ddgst:-false} 00:33:57.106 }, 00:33:57.106 "method": "bdev_nvme_attach_controller" 00:33:57.106 } 00:33:57.106 EOF 00:33:57.106 )") 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:57.106 20:21:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:57.106 "params": { 00:33:57.106 "name": "Nvme1", 00:33:57.106 "trtype": "tcp", 00:33:57.106 "traddr": "10.0.0.2", 00:33:57.106 "adrfam": "ipv4", 00:33:57.106 "trsvcid": "4420", 00:33:57.106 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:57.106 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:57.106 "hdgst": false, 00:33:57.106 "ddgst": false 00:33:57.106 }, 00:33:57.106 "method": "bdev_nvme_attach_controller" 00:33:57.106 }' 00:33:57.106 [2024-07-13 20:21:44.605797] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:57.106 [2024-07-13 20:21:44.605907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348860 ] 00:33:57.106 EAL: No free 2048 kB hugepages reported on node 1 00:33:57.106 [2024-07-13 20:21:44.669152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.106 [2024-07-13 20:21:44.756160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.672 Running I/O for 1 seconds... 
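The bdevperf launch above takes its whole bdev configuration as JSON on /dev/fd/62, generated inline by gen_nvmf_target_json from the parameters printed in the trace. For orientation, an equivalent standalone run; the subsystems/bdev envelope around the attach params is an assumption based on how common.sh wraps them, not shown verbatim in this trace:

# Hedged reconstruction of the 1-second verify run against the target
# listening on 10.0.0.2:4420 as set up earlier.
cat > /tmp/bperf.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
./build/examples/bdevperf --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 1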
00:33:58.612 00:33:58.612 Latency(us) 00:33:58.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:58.612 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:58.612 Verification LBA range: start 0x0 length 0x4000 00:33:58.612 Nvme1n1 : 1.00 8604.76 33.61 0.00 0.00 14812.72 1122.61 18350.08 00:33:58.612 =================================================================================================================== 00:33:58.612 Total : 8604.76 33.61 0.00 0.00 14812.72 1122.61 18350.08 00:33:58.870 20:21:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3349118 00:33:58.870 20:21:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:58.870 20:21:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:58.870 20:21:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:58.870 20:21:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:58.870 20:21:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:58.870 20:21:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:58.870 20:21:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:58.870 { 00:33:58.870 "params": { 00:33:58.870 "name": "Nvme$subsystem", 00:33:58.870 "trtype": "$TEST_TRANSPORT", 00:33:58.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:58.870 "adrfam": "ipv4", 00:33:58.870 "trsvcid": "$NVMF_PORT", 00:33:58.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:58.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:58.870 "hdgst": ${hdgst:-false}, 00:33:58.870 "ddgst": ${ddgst:-false} 00:33:58.870 }, 00:33:58.870 "method": "bdev_nvme_attach_controller" 00:33:58.870 } 00:33:58.870 EOF 00:33:58.870 )") 00:33:58.870 20:21:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:58.870 20:21:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:58.870 20:21:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:58.870 20:21:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:58.870 "params": { 00:33:58.870 "name": "Nvme1", 00:33:58.870 "trtype": "tcp", 00:33:58.870 "traddr": "10.0.0.2", 00:33:58.870 "adrfam": "ipv4", 00:33:58.870 "trsvcid": "4420", 00:33:58.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:58.870 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:58.870 "hdgst": false, 00:33:58.870 "ddgst": false 00:33:58.870 }, 00:33:58.870 "method": "bdev_nvme_attach_controller" 00:33:58.870 }' 00:33:58.870 [2024-07-13 20:21:46.350586] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:33:58.870 [2024-07-13 20:21:46.350669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3349118 ] 00:33:58.870 EAL: No free 2048 kB hugepages reported on node 1 00:33:58.870 [2024-07-13 20:21:46.410585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:58.870 [2024-07-13 20:21:46.494807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.128 Running I/O for 15 seconds... 
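This second bdevperf pass (-q 128 -o 4096 -w verify -t 15 -f) is the failover half of the test: a few seconds into the 15-second run the script SIGKILLs the target, so every in-flight command drains back as an aborted completion, which is exactly the storm that follows. In outline:

# What the trace does next (pid from this run shown for orientation):
kill -9 3348828   # host/bdevperf.sh@33: nvmf_tgt dies with I/O in flight
sleep 3           # host/bdevperf.sh@35: bdevperf, launched with -f, rides
                  # out the failure while the aborted completions are logged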
00:34:01.656 20:21:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3348828 00:34:01.919 20:21:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:01.919 [2024-07-13 20:21:49.323930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.323977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:56376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:56384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:56424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:56432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324330] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:56456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:56496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:56536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:56552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:56560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:56568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.324983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.324998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.325013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.325028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:56600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.325042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.325057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.325071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.325086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:56616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.325100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.325116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.325129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.325160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:56632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.325177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.325198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.325215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.325233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:56648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.325249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.325266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:56656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.325281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.325298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:56664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.325314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.919 [2024-07-13 20:21:49.325331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:56672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.919 [2024-07-13 20:21:49.325346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.920 [2024-07-13 20:21:49.325363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:56680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:01.920 [2024-07-13 20:21:49.325379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:01.920 [2024-07-13 20:21:49.325396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:56688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:01.920 [2024-07-13 20:21:49.325411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 84 further READ command/completion pairs elided: lba 56696 through 57360 in len:8 steps (various cids), plus two interleaved WRITEs at lba 57376 (cid:87) and lba 57384 (cid:126); every command completed ABORTED - SQ DELETION (00/08) qid:1 ...]
00:34:01.922 [2024-07-13 20:21:49.328376] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x885150 is same with the state(5) to be set
00:34:01.922 [2024-07-13 20:21:49.328395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:01.922 [2024-07-13 20:21:49.328409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:01.922 [2024-07-13 20:21:49.328421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57368 len:8 PRP1 0x0 PRP2 0x0
00:34:01.922 [2024-07-13 20:21:49.328437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:01.922 [2024-07-13 20:21:49.328506] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x885150 was disconnected and freed. reset controller.
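
Every I/O still queued on qpair 0x885150 at the moment of the TCP drop is drained with ABORTED - SQ DELETION before the qpair is freed, which is the expected behavior when a submission queue goes away mid-flight. A dump like this can be tallied mechanically; the following is a minimal Python sketch under stated assumptions (the file name autotest.log and the regex are illustrative, not part of this run), counting the printed I/O commands per opcode and reporting the LBA span:

import re
import sys
from collections import Counter

# Matches SPDK I/O command prints on the abort path, e.g.:
#   nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:56688 len:8 ...
# Admin commands (nvme_admin_qpair_print_command) are deliberately not matched.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:\d+ cid:\d+ nsid:\d+ lba:(\d+) len:(\d+)"
)

def summarize(path):
    ops = Counter()
    lbas = []
    with open(path) as fh:
        for line in fh:
            m = CMD_RE.search(line)
            if m:
                ops[m.group(1)] += 1
                lbas.append(int(m.group(2)))
    if not lbas:
        print("no aborted I/O commands found")
        return
    print("aborted commands by opcode:", dict(ops))
    print("LBA span: %d..%d" % (min(lbas), max(lbas)))

if __name__ == "__main__":
    # Hypothetical default file name; pass the real console log path instead.
    summarize(sys.argv[1] if len(sys.argv) > 1 else "autotest.log")

Run against this section it would report roughly 86 READs and 2 WRITEs spanning lba 56688..57384, matching the elision note above.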
00:34:01.922 [2024-07-13 20:21:49.328586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:34:01.922 [2024-07-13 20:21:49.328610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:01.922 [2024-07-13 20:21:49.328627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:34:01.922 [2024-07-13 20:21:49.328655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:01.922 [2024-07-13 20:21:49.328670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:34:01.922 [2024-07-13 20:21:49.328683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:01.922 [2024-07-13 20:21:49.328711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:34:01.922 [2024-07-13 20:21:49.328726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:01.922 [2024-07-13 20:21:49.328739] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:01.922 [2024-07-13 20:21:49.332637] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.922 [2024-07-13 20:21:49.332684] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:01.922 [2024-07-13 20:21:49.333371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.922 [2024-07-13 20:21:49.333406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:01.922 [2024-07-13 20:21:49.333425] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:01.922 [2024-07-13 20:21:49.333666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:01.922 [2024-07-13 20:21:49.333925] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.922 [2024-07-13 20:21:49.333950] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.922 [2024-07-13 20:21:49.333969] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.922 [2024-07-13 20:21:49.337569] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
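
The recurring connect() failed, errno = 111 is Linux ECONNREFUSED: while the target restarts, nothing is listening on 10.0.0.2:4420, so each TCP connect attempt is answered with a reset. The mapping can be confirmed in two lines (errno numbers are platform-specific; 111 is the Linux value):

import errno
import os

# On Linux (where this job runs), ECONNREFUSED is 111: the target's
# listener at 10.0.0.2:4420 is down mid-reset, so connect() gets RST.
print(errno.ECONNREFUSED)               # 111 on Linux
print(os.strerror(errno.ECONNREFUSED))  # "Connection refused"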
00:34:01.922 [2024-07-13 20:21:49.346921] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:01.922 [2024-07-13 20:21:49.347355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:01.922 [2024-07-13 20:21:49.347388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:01.922 [2024-07-13 20:21:49.347407] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:01.922 [2024-07-13 20:21:49.347649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:01.922 [2024-07-13 20:21:49.347906] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:01.922 [2024-07-13 20:21:49.347942] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:01.922 [2024-07-13 20:21:49.347959] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:01.922 [2024-07-13 20:21:49.351548] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... 26 further reset attempts elided, one roughly every 14 ms from 20:21:49.360888 through 20:21:49.709859, each failing identically: connect() to 10.0.0.2:4420 refused with errno = 111, tqpair=0x88ae70 flush fails with Bad file descriptor, and controller reinitialization fails ...]
00:34:02.186 [2024-07-13 20:21:49.723801] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:02.186 [2024-07-13 20:21:49.724286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:02.186 [2024-07-13 20:21:49.724317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:02.186 [2024-07-13 20:21:49.724335] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:02.186 [2024-07-13 20:21:49.724575] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:02.186 [2024-07-13 20:21:49.724819] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:02.186 [2024-07-13 20:21:49.724842] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:02.186 [2024-07-13 20:21:49.724858] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:02.186 [2024-07-13 20:21:49.728462] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:02.186 [2024-07-13 20:21:49.737804] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.186 [2024-07-13 20:21:49.738249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.186 [2024-07-13 20:21:49.738282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.186 [2024-07-13 20:21:49.738300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.186 [2024-07-13 20:21:49.738540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.186 [2024-07-13 20:21:49.738785] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.186 [2024-07-13 20:21:49.738809] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.186 [2024-07-13 20:21:49.738824] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.186 [2024-07-13 20:21:49.742422] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.186 [2024-07-13 20:21:49.751751] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.186 [2024-07-13 20:21:49.752211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.186 [2024-07-13 20:21:49.752238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.186 [2024-07-13 20:21:49.752270] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.186 [2024-07-13 20:21:49.752529] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.186 [2024-07-13 20:21:49.752774] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.186 [2024-07-13 20:21:49.752798] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.186 [2024-07-13 20:21:49.752815] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.186 [2024-07-13 20:21:49.756412] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.186 [2024-07-13 20:21:49.765753] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.186 [2024-07-13 20:21:49.766199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.186 [2024-07-13 20:21:49.766230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.186 [2024-07-13 20:21:49.766249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.187 [2024-07-13 20:21:49.766489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.187 [2024-07-13 20:21:49.766732] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.187 [2024-07-13 20:21:49.766756] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.187 [2024-07-13 20:21:49.766772] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.187 [2024-07-13 20:21:49.770373] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.187 [2024-07-13 20:21:49.779733] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.187 [2024-07-13 20:21:49.780172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.187 [2024-07-13 20:21:49.780203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.187 [2024-07-13 20:21:49.780221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.187 [2024-07-13 20:21:49.780460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.187 [2024-07-13 20:21:49.780703] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.187 [2024-07-13 20:21:49.780727] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.187 [2024-07-13 20:21:49.780743] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.187 [2024-07-13 20:21:49.784346] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.187 [2024-07-13 20:21:49.793682] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.187 [2024-07-13 20:21:49.794130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.187 [2024-07-13 20:21:49.794157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.187 [2024-07-13 20:21:49.794172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.187 [2024-07-13 20:21:49.794418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.187 [2024-07-13 20:21:49.794663] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.187 [2024-07-13 20:21:49.794687] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.187 [2024-07-13 20:21:49.794709] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.187 [2024-07-13 20:21:49.798305] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.187 [2024-07-13 20:21:49.807639] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.187 [2024-07-13 20:21:49.808120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.187 [2024-07-13 20:21:49.808166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.187 [2024-07-13 20:21:49.808182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.187 [2024-07-13 20:21:49.808443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.187 [2024-07-13 20:21:49.808687] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.187 [2024-07-13 20:21:49.808711] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.187 [2024-07-13 20:21:49.808726] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.187 [2024-07-13 20:21:49.812325] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.187 [2024-07-13 20:21:49.821662] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.187 [2024-07-13 20:21:49.822098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.187 [2024-07-13 20:21:49.822128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.187 [2024-07-13 20:21:49.822146] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.187 [2024-07-13 20:21:49.822386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.187 [2024-07-13 20:21:49.822629] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.187 [2024-07-13 20:21:49.822653] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.187 [2024-07-13 20:21:49.822670] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.187 [2024-07-13 20:21:49.826265] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.187 [2024-07-13 20:21:49.835645] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.187 [2024-07-13 20:21:49.836084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.187 [2024-07-13 20:21:49.836117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.187 [2024-07-13 20:21:49.836135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.187 [2024-07-13 20:21:49.836375] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.187 [2024-07-13 20:21:49.836620] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.187 [2024-07-13 20:21:49.836648] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.187 [2024-07-13 20:21:49.836679] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.187 [2024-07-13 20:21:49.840411] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.448 [2024-07-13 20:21:49.849709] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.448 [2024-07-13 20:21:49.850192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.448 [2024-07-13 20:21:49.850224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.448 [2024-07-13 20:21:49.850243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.448 [2024-07-13 20:21:49.850482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.448 [2024-07-13 20:21:49.850726] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.448 [2024-07-13 20:21:49.850750] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.448 [2024-07-13 20:21:49.850766] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.448 [2024-07-13 20:21:49.854366] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.449 [2024-07-13 20:21:49.863693] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.449 [2024-07-13 20:21:49.864130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.449 [2024-07-13 20:21:49.864162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.449 [2024-07-13 20:21:49.864180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.449 [2024-07-13 20:21:49.864419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.449 [2024-07-13 20:21:49.864663] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.449 [2024-07-13 20:21:49.864687] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.449 [2024-07-13 20:21:49.864702] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.449 [2024-07-13 20:21:49.868296] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.449 [2024-07-13 20:21:49.877617] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.449 [2024-07-13 20:21:49.878093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.449 [2024-07-13 20:21:49.878120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.449 [2024-07-13 20:21:49.878149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.449 [2024-07-13 20:21:49.878403] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.449 [2024-07-13 20:21:49.878647] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.449 [2024-07-13 20:21:49.878670] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.449 [2024-07-13 20:21:49.878686] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.449 [2024-07-13 20:21:49.882282] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.449 [2024-07-13 20:21:49.891604] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.449 [2024-07-13 20:21:49.892058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.449 [2024-07-13 20:21:49.892090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.449 [2024-07-13 20:21:49.892108] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.449 [2024-07-13 20:21:49.892347] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.449 [2024-07-13 20:21:49.892599] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.449 [2024-07-13 20:21:49.892623] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.449 [2024-07-13 20:21:49.892639] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.449 [2024-07-13 20:21:49.896238] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.449 [2024-07-13 20:21:49.905570] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.449 [2024-07-13 20:21:49.906019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.449 [2024-07-13 20:21:49.906051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.449 [2024-07-13 20:21:49.906069] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.449 [2024-07-13 20:21:49.906308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.449 [2024-07-13 20:21:49.906552] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.449 [2024-07-13 20:21:49.906575] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.449 [2024-07-13 20:21:49.906591] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.449 [2024-07-13 20:21:49.910187] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.449 [2024-07-13 20:21:49.919511] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.449 [2024-07-13 20:21:49.919956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.449 [2024-07-13 20:21:49.919987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.449 [2024-07-13 20:21:49.920005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.449 [2024-07-13 20:21:49.920245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.449 [2024-07-13 20:21:49.920488] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.449 [2024-07-13 20:21:49.920512] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.449 [2024-07-13 20:21:49.920528] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.449 [2024-07-13 20:21:49.924124] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.449 [2024-07-13 20:21:49.933448] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.449 [2024-07-13 20:21:49.933902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.449 [2024-07-13 20:21:49.933930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.449 [2024-07-13 20:21:49.933947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.449 [2024-07-13 20:21:49.934203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.449 [2024-07-13 20:21:49.934447] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.449 [2024-07-13 20:21:49.934471] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.449 [2024-07-13 20:21:49.934487] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.449 [2024-07-13 20:21:49.938092] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.449 [2024-07-13 20:21:49.947416] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.449 [2024-07-13 20:21:49.947879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.449 [2024-07-13 20:21:49.947910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.449 [2024-07-13 20:21:49.947928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.449 [2024-07-13 20:21:49.948167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.449 [2024-07-13 20:21:49.948410] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.449 [2024-07-13 20:21:49.948434] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.449 [2024-07-13 20:21:49.948450] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.449 [2024-07-13 20:21:49.952043] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.449 [2024-07-13 20:21:49.961371] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.449 [2024-07-13 20:21:49.961801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.449 [2024-07-13 20:21:49.961832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.449 [2024-07-13 20:21:49.961850] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.449 [2024-07-13 20:21:49.962099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.449 [2024-07-13 20:21:49.962343] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.449 [2024-07-13 20:21:49.962367] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.449 [2024-07-13 20:21:49.962383] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.449 [2024-07-13 20:21:49.965978] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.449 [2024-07-13 20:21:49.975302] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.449 [2024-07-13 20:21:49.975763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.449 [2024-07-13 20:21:49.975795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.449 [2024-07-13 20:21:49.975812] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.449 [2024-07-13 20:21:49.976064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.449 [2024-07-13 20:21:49.976308] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.449 [2024-07-13 20:21:49.976332] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.449 [2024-07-13 20:21:49.976348] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.449 [2024-07-13 20:21:49.979955] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.449 [2024-07-13 20:21:49.989282] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.449 [2024-07-13 20:21:49.989712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.449 [2024-07-13 20:21:49.989743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.449 [2024-07-13 20:21:49.989766] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.449 [2024-07-13 20:21:49.990019] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.449 [2024-07-13 20:21:49.990264] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.449 [2024-07-13 20:21:49.990288] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.449 [2024-07-13 20:21:49.990304] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.449 [2024-07-13 20:21:49.993896] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.449 [2024-07-13 20:21:50.003139] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.449 [2024-07-13 20:21:50.003529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.449 [2024-07-13 20:21:50.003572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.449 [2024-07-13 20:21:50.003588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.449 [2024-07-13 20:21:50.003832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.449 [2024-07-13 20:21:50.004078] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.449 [2024-07-13 20:21:50.004100] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.450 [2024-07-13 20:21:50.004115] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.450 [2024-07-13 20:21:50.007290] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.450 [2024-07-13 20:21:50.016490] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.450 [2024-07-13 20:21:50.016942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.450 [2024-07-13 20:21:50.016971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.450 [2024-07-13 20:21:50.016987] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.450 [2024-07-13 20:21:50.017218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.450 [2024-07-13 20:21:50.017440] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.450 [2024-07-13 20:21:50.017461] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.450 [2024-07-13 20:21:50.017476] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.450 [2024-07-13 20:21:50.020612] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.450 [2024-07-13 20:21:50.029929] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.450 [2024-07-13 20:21:50.030370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.450 [2024-07-13 20:21:50.030399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.450 [2024-07-13 20:21:50.030416] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.450 [2024-07-13 20:21:50.030654] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.450 [2024-07-13 20:21:50.030882] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.450 [2024-07-13 20:21:50.030923] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.450 [2024-07-13 20:21:50.030938] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.450 [2024-07-13 20:21:50.034086] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.450 [2024-07-13 20:21:50.044013] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.450 [2024-07-13 20:21:50.044481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.450 [2024-07-13 20:21:50.044515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.450 [2024-07-13 20:21:50.044535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.450 [2024-07-13 20:21:50.044776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.450 [2024-07-13 20:21:50.045032] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.450 [2024-07-13 20:21:50.045058] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.450 [2024-07-13 20:21:50.045074] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.450 [2024-07-13 20:21:50.048665] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.450 [2024-07-13 20:21:50.058018] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.450 [2024-07-13 20:21:50.058519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.450 [2024-07-13 20:21:50.058569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.450 [2024-07-13 20:21:50.058588] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.450 [2024-07-13 20:21:50.058828] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.450 [2024-07-13 20:21:50.059082] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.450 [2024-07-13 20:21:50.059107] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.450 [2024-07-13 20:21:50.059123] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.450 [2024-07-13 20:21:50.062718] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.450 [2024-07-13 20:21:50.072057] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.450 [2024-07-13 20:21:50.072550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.450 [2024-07-13 20:21:50.072601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.450 [2024-07-13 20:21:50.072619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.450 [2024-07-13 20:21:50.072858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.450 [2024-07-13 20:21:50.073113] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.450 [2024-07-13 20:21:50.073137] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.450 [2024-07-13 20:21:50.073153] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.450 [2024-07-13 20:21:50.076739] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.450 [2024-07-13 20:21:50.086093] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.450 [2024-07-13 20:21:50.086544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.450 [2024-07-13 20:21:50.086573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.450 [2024-07-13 20:21:50.086589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.450 [2024-07-13 20:21:50.086845] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.450 [2024-07-13 20:21:50.087098] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.450 [2024-07-13 20:21:50.087123] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.450 [2024-07-13 20:21:50.087139] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.450 [2024-07-13 20:21:50.090725] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.450 [2024-07-13 20:21:50.100115] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.450 [2024-07-13 20:21:50.100655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.450 [2024-07-13 20:21:50.100687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.450 [2024-07-13 20:21:50.100705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.450 [2024-07-13 20:21:50.100959] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.450 [2024-07-13 20:21:50.101250] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.450 [2024-07-13 20:21:50.101276] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.450 [2024-07-13 20:21:50.101292] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.710 [2024-07-13 20:21:50.104968] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.710 [2024-07-13 20:21:50.114047] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.710 [2024-07-13 20:21:50.114518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.710 [2024-07-13 20:21:50.114561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.710 [2024-07-13 20:21:50.114578] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.710 [2024-07-13 20:21:50.114840] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.710 [2024-07-13 20:21:50.115095] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.711 [2024-07-13 20:21:50.115119] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.711 [2024-07-13 20:21:50.115135] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.711 [2024-07-13 20:21:50.118723] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.711 [2024-07-13 20:21:50.128062] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.711 [2024-07-13 20:21:50.128523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.711 [2024-07-13 20:21:50.128554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.711 [2024-07-13 20:21:50.128572] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.711 [2024-07-13 20:21:50.128818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.711 [2024-07-13 20:21:50.129073] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.711 [2024-07-13 20:21:50.129098] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.711 [2024-07-13 20:21:50.129113] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.711 [2024-07-13 20:21:50.132703] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.711 [2024-07-13 20:21:50.142042] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.711 [2024-07-13 20:21:50.142477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.711 [2024-07-13 20:21:50.142508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.711 [2024-07-13 20:21:50.142526] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.711 [2024-07-13 20:21:50.142765] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.711 [2024-07-13 20:21:50.143023] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.711 [2024-07-13 20:21:50.143048] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.711 [2024-07-13 20:21:50.143064] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.711 [2024-07-13 20:21:50.146650] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.711 [2024-07-13 20:21:50.155984] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.711 [2024-07-13 20:21:50.156436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.711 [2024-07-13 20:21:50.156467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.711 [2024-07-13 20:21:50.156485] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.711 [2024-07-13 20:21:50.156724] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.711 [2024-07-13 20:21:50.156980] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.711 [2024-07-13 20:21:50.157005] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.711 [2024-07-13 20:21:50.157021] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.711 [2024-07-13 20:21:50.160614] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.711 [2024-07-13 20:21:50.169949] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.711 [2024-07-13 20:21:50.170407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.711 [2024-07-13 20:21:50.170435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.711 [2024-07-13 20:21:50.170451] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.711 [2024-07-13 20:21:50.170702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.711 [2024-07-13 20:21:50.170959] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.711 [2024-07-13 20:21:50.170983] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.711 [2024-07-13 20:21:50.171005] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.711 [2024-07-13 20:21:50.174593] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.711 [2024-07-13 20:21:50.183935] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.711 [2024-07-13 20:21:50.184406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.711 [2024-07-13 20:21:50.184437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.711 [2024-07-13 20:21:50.184455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.711 [2024-07-13 20:21:50.184694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.711 [2024-07-13 20:21:50.184950] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.711 [2024-07-13 20:21:50.184975] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.711 [2024-07-13 20:21:50.184991] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.711 [2024-07-13 20:21:50.188581] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.711 [2024-07-13 20:21:50.197924] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.711 [2024-07-13 20:21:50.198385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.711 [2024-07-13 20:21:50.198411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.711 [2024-07-13 20:21:50.198443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.711 [2024-07-13 20:21:50.198701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.711 [2024-07-13 20:21:50.198958] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.711 [2024-07-13 20:21:50.198982] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.711 [2024-07-13 20:21:50.198999] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.711 [2024-07-13 20:21:50.202587] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.711 [2024-07-13 20:21:50.211928] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.711 [2024-07-13 20:21:50.212379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.711 [2024-07-13 20:21:50.212410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.711 [2024-07-13 20:21:50.212428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.711 [2024-07-13 20:21:50.212667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.711 [2024-07-13 20:21:50.212921] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.711 [2024-07-13 20:21:50.212945] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.711 [2024-07-13 20:21:50.212961] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.711 [2024-07-13 20:21:50.216544] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.711 [2024-07-13 20:21:50.225876] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.711 [2024-07-13 20:21:50.226360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.711 [2024-07-13 20:21:50.226401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.711 [2024-07-13 20:21:50.226418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.711 [2024-07-13 20:21:50.226667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.711 [2024-07-13 20:21:50.226920] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.711 [2024-07-13 20:21:50.226945] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.711 [2024-07-13 20:21:50.226960] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.711 [2024-07-13 20:21:50.230549] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.711 [2024-07-13 20:21:50.239880] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.711 [2024-07-13 20:21:50.240309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.711 [2024-07-13 20:21:50.240356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.711 [2024-07-13 20:21:50.240373] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.711 [2024-07-13 20:21:50.240612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.711 [2024-07-13 20:21:50.240855] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.711 [2024-07-13 20:21:50.240888] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.711 [2024-07-13 20:21:50.240904] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.711 [2024-07-13 20:21:50.244490] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.711 [2024-07-13 20:21:50.253812] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.711 [2024-07-13 20:21:50.254276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.711 [2024-07-13 20:21:50.254304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.711 [2024-07-13 20:21:50.254320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.711 [2024-07-13 20:21:50.254579] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.711 [2024-07-13 20:21:50.254823] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.711 [2024-07-13 20:21:50.254848] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.711 [2024-07-13 20:21:50.254863] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.711 [2024-07-13 20:21:50.258461] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.711 [2024-07-13 20:21:50.267793] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.711 [2024-07-13 20:21:50.268246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.711 [2024-07-13 20:21:50.268274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.712 [2024-07-13 20:21:50.268290] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.712 [2024-07-13 20:21:50.268543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.712 [2024-07-13 20:21:50.268792] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.712 [2024-07-13 20:21:50.268817] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.712 [2024-07-13 20:21:50.268833] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.712 [2024-07-13 20:21:50.272431] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.712 [2024-07-13 20:21:50.281725] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.712 [2024-07-13 20:21:50.282180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.712 [2024-07-13 20:21:50.282208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.712 [2024-07-13 20:21:50.282225] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.712 [2024-07-13 20:21:50.282448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.712 [2024-07-13 20:21:50.282674] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.712 [2024-07-13 20:21:50.282696] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.712 [2024-07-13 20:21:50.282711] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.712 [2024-07-13 20:21:50.286074] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.712 [2024-07-13 20:21:50.295263] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.712 [2024-07-13 20:21:50.295677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.712 [2024-07-13 20:21:50.295704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.712 [2024-07-13 20:21:50.295734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.712 [2024-07-13 20:21:50.295960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.712 [2024-07-13 20:21:50.296192] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.712 [2024-07-13 20:21:50.296213] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.712 [2024-07-13 20:21:50.296227] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.712 [2024-07-13 20:21:50.299306] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.712 [2024-07-13 20:21:50.308614] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.712 [2024-07-13 20:21:50.309095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.712 [2024-07-13 20:21:50.309123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.712 [2024-07-13 20:21:50.309139] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.712 [2024-07-13 20:21:50.309382] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.712 [2024-07-13 20:21:50.309589] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.712 [2024-07-13 20:21:50.309609] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.712 [2024-07-13 20:21:50.309622] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.712 [2024-07-13 20:21:50.312751] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.712 [2024-07-13 20:21:50.322074] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.712 [2024-07-13 20:21:50.322598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.712 [2024-07-13 20:21:50.322626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.712 [2024-07-13 20:21:50.322642] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.712 [2024-07-13 20:21:50.322907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.712 [2024-07-13 20:21:50.323127] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.712 [2024-07-13 20:21:50.323149] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.712 [2024-07-13 20:21:50.323163] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.712 [2024-07-13 20:21:50.326342] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.712 [2024-07-13 20:21:50.335571] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.712 [2024-07-13 20:21:50.336014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.712 [2024-07-13 20:21:50.336042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.712 [2024-07-13 20:21:50.336059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.712 [2024-07-13 20:21:50.336302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.712 [2024-07-13 20:21:50.336545] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.712 [2024-07-13 20:21:50.336567] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.712 [2024-07-13 20:21:50.336581] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.712 [2024-07-13 20:21:50.339992] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.712 [2024-07-13 20:21:50.348902] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.712 [2024-07-13 20:21:50.349291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.712 [2024-07-13 20:21:50.349318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.712 [2024-07-13 20:21:50.349333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.712 [2024-07-13 20:21:50.349569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.712 [2024-07-13 20:21:50.349769] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.712 [2024-07-13 20:21:50.349789] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.712 [2024-07-13 20:21:50.349801] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.712 [2024-07-13 20:21:50.353070] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.712 [2024-07-13 20:21:50.362382] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.712 [2024-07-13 20:21:50.362862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.712 [2024-07-13 20:21:50.362897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.712 [2024-07-13 20:21:50.362919] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.712 [2024-07-13 20:21:50.363166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.712 [2024-07-13 20:21:50.363402] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.712 [2024-07-13 20:21:50.363422] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.712 [2024-07-13 20:21:50.363435] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.974 [2024-07-13 20:21:50.366831] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.974 [2024-07-13 20:21:50.375686] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.974 [2024-07-13 20:21:50.379052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.974 [2024-07-13 20:21:50.379092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.974 [2024-07-13 20:21:50.379111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.974 [2024-07-13 20:21:50.379357] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.974 [2024-07-13 20:21:50.379558] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.974 [2024-07-13 20:21:50.379578] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.974 [2024-07-13 20:21:50.379591] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.974 [2024-07-13 20:21:50.382594] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.974 [2024-07-13 20:21:50.389055] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.974 [2024-07-13 20:21:50.389575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.974 [2024-07-13 20:21:50.389605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.974 [2024-07-13 20:21:50.389621] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.974 [2024-07-13 20:21:50.389862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.974 [2024-07-13 20:21:50.390078] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.974 [2024-07-13 20:21:50.390098] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.974 [2024-07-13 20:21:50.390112] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.974 [2024-07-13 20:21:50.393149] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.974 [2024-07-13 20:21:50.402403] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.974 [2024-07-13 20:21:50.402888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.974 [2024-07-13 20:21:50.402917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.974 [2024-07-13 20:21:50.402933] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.974 [2024-07-13 20:21:50.403150] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.974 [2024-07-13 20:21:50.403365] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.974 [2024-07-13 20:21:50.403390] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.974 [2024-07-13 20:21:50.403403] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.974 [2024-07-13 20:21:50.406403] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.974 [2024-07-13 20:21:50.415646] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.974 [2024-07-13 20:21:50.416087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.974 [2024-07-13 20:21:50.416116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.974 [2024-07-13 20:21:50.416132] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.974 [2024-07-13 20:21:50.416396] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.974 [2024-07-13 20:21:50.416596] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.974 [2024-07-13 20:21:50.416615] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.974 [2024-07-13 20:21:50.416628] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.974 [2024-07-13 20:21:50.419664] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.974 [2024-07-13 20:21:50.428928] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.974 [2024-07-13 20:21:50.429342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.974 [2024-07-13 20:21:50.429368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.974 [2024-07-13 20:21:50.429383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.974 [2024-07-13 20:21:50.429600] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.974 [2024-07-13 20:21:50.429800] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.974 [2024-07-13 20:21:50.429819] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.974 [2024-07-13 20:21:50.429832] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.974 [2024-07-13 20:21:50.432902] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.974 [2024-07-13 20:21:50.442328] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.974 [2024-07-13 20:21:50.442757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.974 [2024-07-13 20:21:50.442784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.974 [2024-07-13 20:21:50.442800] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.974 [2024-07-13 20:21:50.443041] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.974 [2024-07-13 20:21:50.443283] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.974 [2024-07-13 20:21:50.443303] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.974 [2024-07-13 20:21:50.443316] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.974 [2024-07-13 20:21:50.446312] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.974 [2024-07-13 20:21:50.455691] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.974 [2024-07-13 20:21:50.456104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.974 [2024-07-13 20:21:50.456133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.974 [2024-07-13 20:21:50.456149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.974 [2024-07-13 20:21:50.456400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.974 [2024-07-13 20:21:50.456600] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.974 [2024-07-13 20:21:50.456619] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.974 [2024-07-13 20:21:50.456632] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.974 [2024-07-13 20:21:50.459667] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.974 [2024-07-13 20:21:50.468991] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.974 [2024-07-13 20:21:50.469442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.974 [2024-07-13 20:21:50.469470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.974 [2024-07-13 20:21:50.469486] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.974 [2024-07-13 20:21:50.469741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.974 [2024-07-13 20:21:50.469988] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.974 [2024-07-13 20:21:50.470010] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.974 [2024-07-13 20:21:50.470024] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.974 [2024-07-13 20:21:50.473037] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.974 [2024-07-13 20:21:50.482236] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.974 [2024-07-13 20:21:50.482648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.974 [2024-07-13 20:21:50.482675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.974 [2024-07-13 20:21:50.482705] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.974 [2024-07-13 20:21:50.482976] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.974 [2024-07-13 20:21:50.483205] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.974 [2024-07-13 20:21:50.483225] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.974 [2024-07-13 20:21:50.483238] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.974 [2024-07-13 20:21:50.486233] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.974 [2024-07-13 20:21:50.495430] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.974 [2024-07-13 20:21:50.495877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.974 [2024-07-13 20:21:50.495920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.974 [2024-07-13 20:21:50.495941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.974 [2024-07-13 20:21:50.496173] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.975 [2024-07-13 20:21:50.496389] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.975 [2024-07-13 20:21:50.496409] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.975 [2024-07-13 20:21:50.496422] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.975 [2024-07-13 20:21:50.499449] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.975 [2024-07-13 20:21:50.508793] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.975 [2024-07-13 20:21:50.509288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.975 [2024-07-13 20:21:50.509317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.975 [2024-07-13 20:21:50.509333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.975 [2024-07-13 20:21:50.509582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.975 [2024-07-13 20:21:50.509798] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.975 [2024-07-13 20:21:50.509818] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.975 [2024-07-13 20:21:50.509831] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.975 [2024-07-13 20:21:50.512887] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.975 [2024-07-13 20:21:50.522057] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.975 [2024-07-13 20:21:50.522564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.975 [2024-07-13 20:21:50.522592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.975 [2024-07-13 20:21:50.522608] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.975 [2024-07-13 20:21:50.522864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.975 [2024-07-13 20:21:50.523104] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.975 [2024-07-13 20:21:50.523126] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.975 [2024-07-13 20:21:50.523140] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.975 [2024-07-13 20:21:50.526137] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.975 [2024-07-13 20:21:50.535304] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.975 [2024-07-13 20:21:50.535796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.975 [2024-07-13 20:21:50.535824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.975 [2024-07-13 20:21:50.535840] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.975 [2024-07-13 20:21:50.536079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.975 [2024-07-13 20:21:50.536302] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.975 [2024-07-13 20:21:50.536322] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.975 [2024-07-13 20:21:50.536343] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.975 [2024-07-13 20:21:50.539340] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.975 [2024-07-13 20:21:50.548542] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.975 [2024-07-13 20:21:50.548916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.975 [2024-07-13 20:21:50.548945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.975 [2024-07-13 20:21:50.548961] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.975 [2024-07-13 20:21:50.549203] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.975 [2024-07-13 20:21:50.549403] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.975 [2024-07-13 20:21:50.549422] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.975 [2024-07-13 20:21:50.549436] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.975 [2024-07-13 20:21:50.552472] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.975 [2024-07-13 20:21:50.561839] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.975 [2024-07-13 20:21:50.562279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.975 [2024-07-13 20:21:50.562305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.975 [2024-07-13 20:21:50.562335] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.975 [2024-07-13 20:21:50.562569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.975 [2024-07-13 20:21:50.562769] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.975 [2024-07-13 20:21:50.562788] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.975 [2024-07-13 20:21:50.562801] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.975 [2024-07-13 20:21:50.565832] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.975 [2024-07-13 20:21:50.575212] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.975 [2024-07-13 20:21:50.575654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.975 [2024-07-13 20:21:50.575681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.975 [2024-07-13 20:21:50.575711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.975 [2024-07-13 20:21:50.575978] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.975 [2024-07-13 20:21:50.576191] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.975 [2024-07-13 20:21:50.576212] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.975 [2024-07-13 20:21:50.576240] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.975 [2024-07-13 20:21:50.579235] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.975 [2024-07-13 20:21:50.588418] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.975 [2024-07-13 20:21:50.588922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.975 [2024-07-13 20:21:50.588950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.975 [2024-07-13 20:21:50.588966] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.975 [2024-07-13 20:21:50.589181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.975 [2024-07-13 20:21:50.589415] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.975 [2024-07-13 20:21:50.589437] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.975 [2024-07-13 20:21:50.589451] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.975 [2024-07-13 20:21:50.592863] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.975 [2024-07-13 20:21:50.601766] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.975 [2024-07-13 20:21:50.602280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.975 [2024-07-13 20:21:50.602310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.975 [2024-07-13 20:21:50.602326] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.975 [2024-07-13 20:21:50.602569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.975 [2024-07-13 20:21:50.602785] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.975 [2024-07-13 20:21:50.602805] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.975 [2024-07-13 20:21:50.602817] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.975 [2024-07-13 20:21:50.605874] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:02.975 [2024-07-13 20:21:50.615005] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.975 [2024-07-13 20:21:50.615488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.975 [2024-07-13 20:21:50.615515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.975 [2024-07-13 20:21:50.615547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.975 [2024-07-13 20:21:50.615805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:02.975 [2024-07-13 20:21:50.616054] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:02.975 [2024-07-13 20:21:50.616076] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:02.975 [2024-07-13 20:21:50.616090] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:02.975 [2024-07-13 20:21:50.619088] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:02.975 [2024-07-13 20:21:50.628596] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:02.975 [2024-07-13 20:21:50.629031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:02.975 [2024-07-13 20:21:50.629061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:02.975 [2024-07-13 20:21:50.629078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:02.975 [2024-07-13 20:21:50.629308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.235 [2024-07-13 20:21:50.629509] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.235 [2024-07-13 20:21:50.629531] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.235 [2024-07-13 20:21:50.629544] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.235 [2024-07-13 20:21:50.632609] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.235 [2024-07-13 20:21:50.641831] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.235 [2024-07-13 20:21:50.642302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.235 [2024-07-13 20:21:50.642343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.235 [2024-07-13 20:21:50.642360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.235 [2024-07-13 20:21:50.642593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.235 [2024-07-13 20:21:50.642793] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.235 [2024-07-13 20:21:50.642813] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.235 [2024-07-13 20:21:50.642826] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.235 [2024-07-13 20:21:50.645849] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.235 [2024-07-13 20:21:50.655234] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.235 [2024-07-13 20:21:50.655676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.235 [2024-07-13 20:21:50.655717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.235 [2024-07-13 20:21:50.655734] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.235 [2024-07-13 20:21:50.655982] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.235 [2024-07-13 20:21:50.656195] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.235 [2024-07-13 20:21:50.656216] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.235 [2024-07-13 20:21:50.656245] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.235 [2024-07-13 20:21:50.659239] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.235 [2024-07-13 20:21:50.668572] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.235 [2024-07-13 20:21:50.669043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.235 [2024-07-13 20:21:50.669071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.235 [2024-07-13 20:21:50.669087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.235 [2024-07-13 20:21:50.669343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.235 [2024-07-13 20:21:50.669543] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.235 [2024-07-13 20:21:50.669563] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.235 [2024-07-13 20:21:50.669576] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.235 [2024-07-13 20:21:50.672583] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.235 [2024-07-13 20:21:50.681936] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.235 [2024-07-13 20:21:50.682386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.235 [2024-07-13 20:21:50.682414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.235 [2024-07-13 20:21:50.682430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.235 [2024-07-13 20:21:50.682684] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.235 [2024-07-13 20:21:50.682925] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.235 [2024-07-13 20:21:50.682947] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.235 [2024-07-13 20:21:50.682961] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.235 [2024-07-13 20:21:50.686213] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.235 [2024-07-13 20:21:50.695219] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.235 [2024-07-13 20:21:50.695638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.235 [2024-07-13 20:21:50.695666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.235 [2024-07-13 20:21:50.695697] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.235 [2024-07-13 20:21:50.695965] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.235 [2024-07-13 20:21:50.696193] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.235 [2024-07-13 20:21:50.696213] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.235 [2024-07-13 20:21:50.696241] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.235 [2024-07-13 20:21:50.699198] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.235 [2024-07-13 20:21:50.708426] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.235 [2024-07-13 20:21:50.708840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.235 [2024-07-13 20:21:50.708873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.235 [2024-07-13 20:21:50.708906] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.235 [2024-07-13 20:21:50.709147] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.235 [2024-07-13 20:21:50.709363] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.235 [2024-07-13 20:21:50.709383] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.235 [2024-07-13 20:21:50.709396] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.235 [2024-07-13 20:21:50.712431] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.235 [2024-07-13 20:21:50.721779] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.235 [2024-07-13 20:21:50.722209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.235 [2024-07-13 20:21:50.722241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.236 [2024-07-13 20:21:50.722257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.236 [2024-07-13 20:21:50.722494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.236 [2024-07-13 20:21:50.722695] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.236 [2024-07-13 20:21:50.722715] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.236 [2024-07-13 20:21:50.722728] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.236 [2024-07-13 20:21:50.725759] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.236 [2024-07-13 20:21:50.735121] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.236 [2024-07-13 20:21:50.735535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.236 [2024-07-13 20:21:50.735563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.236 [2024-07-13 20:21:50.735579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.236 [2024-07-13 20:21:50.735834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.236 [2024-07-13 20:21:50.736068] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.236 [2024-07-13 20:21:50.736090] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.236 [2024-07-13 20:21:50.736104] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.236 [2024-07-13 20:21:50.739107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.236 [2024-07-13 20:21:50.748388] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.236 [2024-07-13 20:21:50.748879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.236 [2024-07-13 20:21:50.748908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.236 [2024-07-13 20:21:50.748924] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.236 [2024-07-13 20:21:50.749165] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.236 [2024-07-13 20:21:50.749382] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.236 [2024-07-13 20:21:50.749401] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.236 [2024-07-13 20:21:50.749414] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.236 [2024-07-13 20:21:50.752433] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.236 [2024-07-13 20:21:50.761646] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.236 [2024-07-13 20:21:50.762076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.236 [2024-07-13 20:21:50.762104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.236 [2024-07-13 20:21:50.762120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.236 [2024-07-13 20:21:50.762378] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.236 [2024-07-13 20:21:50.762582] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.236 [2024-07-13 20:21:50.762602] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.236 [2024-07-13 20:21:50.762615] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.236 [2024-07-13 20:21:50.765646] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.236 [2024-07-13 20:21:50.775001] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.236 [2024-07-13 20:21:50.775464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.236 [2024-07-13 20:21:50.775506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.236 [2024-07-13 20:21:50.775523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.236 [2024-07-13 20:21:50.775764] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.236 [2024-07-13 20:21:50.776010] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.236 [2024-07-13 20:21:50.776032] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.236 [2024-07-13 20:21:50.776046] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.236 [2024-07-13 20:21:50.779061] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.236 [2024-07-13 20:21:50.788250] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.236 [2024-07-13 20:21:50.788657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.236 [2024-07-13 20:21:50.788684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.236 [2024-07-13 20:21:50.788699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.236 [2024-07-13 20:21:50.788984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.236 [2024-07-13 20:21:50.789213] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.236 [2024-07-13 20:21:50.789234] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.236 [2024-07-13 20:21:50.789262] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.236 [2024-07-13 20:21:50.792257] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.236 [2024-07-13 20:21:50.801484] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.236 [2024-07-13 20:21:50.801915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.236 [2024-07-13 20:21:50.801944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.236 [2024-07-13 20:21:50.801960] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.236 [2024-07-13 20:21:50.802194] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.236 [2024-07-13 20:21:50.802410] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.236 [2024-07-13 20:21:50.802430] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.236 [2024-07-13 20:21:50.802443] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.236 [2024-07-13 20:21:50.805477] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.236 [2024-07-13 20:21:50.814889] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.236 [2024-07-13 20:21:50.815327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.236 [2024-07-13 20:21:50.815354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.236 [2024-07-13 20:21:50.815371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.236 [2024-07-13 20:21:50.815607] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.236 [2024-07-13 20:21:50.815808] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.236 [2024-07-13 20:21:50.815828] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.236 [2024-07-13 20:21:50.815856] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.236 [2024-07-13 20:21:50.818885] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.236 [2024-07-13 20:21:50.828122] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.236 [2024-07-13 20:21:50.828609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.236 [2024-07-13 20:21:50.828636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.236 [2024-07-13 20:21:50.828667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.236 [2024-07-13 20:21:50.828929] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.236 [2024-07-13 20:21:50.829136] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.236 [2024-07-13 20:21:50.829156] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.236 [2024-07-13 20:21:50.829170] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.236 [2024-07-13 20:21:50.832133] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.236 [2024-07-13 20:21:50.841379] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.236 [2024-07-13 20:21:50.841880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.236 [2024-07-13 20:21:50.841915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.236 [2024-07-13 20:21:50.841931] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.236 [2024-07-13 20:21:50.842161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.236 [2024-07-13 20:21:50.842418] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.236 [2024-07-13 20:21:50.842440] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.236 [2024-07-13 20:21:50.842454] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.236 [2024-07-13 20:21:50.845871] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.236 [2024-07-13 20:21:50.854709] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.236 [2024-07-13 20:21:50.855151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.237 [2024-07-13 20:21:50.855178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.237 [2024-07-13 20:21:50.855199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.237 [2024-07-13 20:21:50.855454] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.237 [2024-07-13 20:21:50.855655] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.237 [2024-07-13 20:21:50.855674] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.237 [2024-07-13 20:21:50.855687] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.237 [2024-07-13 20:21:50.858755] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:03.237 [2024-07-13 20:21:50.868000] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:03.237 [2024-07-13 20:21:50.868452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.237 [2024-07-13 20:21:50.868480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:03.237 [2024-07-13 20:21:50.868496] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:03.237 [2024-07-13 20:21:50.868750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:03.237 [2024-07-13 20:21:50.869000] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:03.237 [2024-07-13 20:21:50.869023] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:03.237 [2024-07-13 20:21:50.869037] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:03.237 [2024-07-13 20:21:50.872067] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:03.237 [2024-07-13 20:21:50.881283] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.237 [2024-07-13 20:21:50.881775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.237 [2024-07-13 20:21:50.881803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.237 [2024-07-13 20:21:50.881819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.237 [2024-07-13 20:21:50.882073] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.237 [2024-07-13 20:21:50.882293] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.237 [2024-07-13 20:21:50.882313] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.237 [2024-07-13 20:21:50.882326] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.237 [2024-07-13 20:21:50.885360] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.497 [2024-07-13 20:21:50.894661] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.497 [2024-07-13 20:21:50.895113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.497 [2024-07-13 20:21:50.895143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.497 [2024-07-13 20:21:50.895160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.497 [2024-07-13 20:21:50.895417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.497 [2024-07-13 20:21:50.895628] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.497 [2024-07-13 20:21:50.895649] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.497 [2024-07-13 20:21:50.895667] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.497 [2024-07-13 20:21:50.898807] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.497 [2024-07-13 20:21:50.908063] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.497 [2024-07-13 20:21:50.908580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.497 [2024-07-13 20:21:50.908623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.497 [2024-07-13 20:21:50.908640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.497 [2024-07-13 20:21:50.908907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.497 [2024-07-13 20:21:50.909121] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.497 [2024-07-13 20:21:50.909142] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.497 [2024-07-13 20:21:50.909156] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.497 [2024-07-13 20:21:50.912184] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.497 [2024-07-13 20:21:50.921392] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.497 [2024-07-13 20:21:50.921833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.497 [2024-07-13 20:21:50.921883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.497 [2024-07-13 20:21:50.921901] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.497 [2024-07-13 20:21:50.922143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.497 [2024-07-13 20:21:50.922358] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.497 [2024-07-13 20:21:50.922377] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.497 [2024-07-13 20:21:50.922390] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.497 [2024-07-13 20:21:50.925414] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.497 [2024-07-13 20:21:50.934706] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.497 [2024-07-13 20:21:50.935156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.497 [2024-07-13 20:21:50.935184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.497 [2024-07-13 20:21:50.935200] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.497 [2024-07-13 20:21:50.935442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.497 [2024-07-13 20:21:50.935658] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.497 [2024-07-13 20:21:50.935678] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.497 [2024-07-13 20:21:50.935691] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.497 [2024-07-13 20:21:50.938687] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.497 [2024-07-13 20:21:50.948115] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.497 [2024-07-13 20:21:50.948562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.497 [2024-07-13 20:21:50.948591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.497 [2024-07-13 20:21:50.948607] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.497 [2024-07-13 20:21:50.948861] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.497 [2024-07-13 20:21:50.949095] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.497 [2024-07-13 20:21:50.949117] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.497 [2024-07-13 20:21:50.949131] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.497 [2024-07-13 20:21:50.952080] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.497 [2024-07-13 20:21:50.961352] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.497 [2024-07-13 20:21:50.961845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.497 [2024-07-13 20:21:50.961879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.497 [2024-07-13 20:21:50.961897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.497 [2024-07-13 20:21:50.962137] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.497 [2024-07-13 20:21:50.962352] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.497 [2024-07-13 20:21:50.962371] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.497 [2024-07-13 20:21:50.962385] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.497 [2024-07-13 20:21:50.965382] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.497 [2024-07-13 20:21:50.974520] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.497 [2024-07-13 20:21:50.974965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.497 [2024-07-13 20:21:50.975007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.497 [2024-07-13 20:21:50.975024] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.497 [2024-07-13 20:21:50.975266] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.497 [2024-07-13 20:21:50.975481] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.497 [2024-07-13 20:21:50.975501] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.497 [2024-07-13 20:21:50.975514] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.497 [2024-07-13 20:21:50.978520] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.497 [2024-07-13 20:21:50.987832] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.497 [2024-07-13 20:21:50.988242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.497 [2024-07-13 20:21:50.988284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.497 [2024-07-13 20:21:50.988299] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.497 [2024-07-13 20:21:50.988542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.497 [2024-07-13 20:21:50.988757] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.497 [2024-07-13 20:21:50.988777] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.498 [2024-07-13 20:21:50.988790] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.498 [2024-07-13 20:21:50.991824] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.498 [2024-07-13 20:21:51.001219] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.498 [2024-07-13 20:21:51.001629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.498 [2024-07-13 20:21:51.001656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.498 [2024-07-13 20:21:51.001672] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.498 [2024-07-13 20:21:51.001955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.498 [2024-07-13 20:21:51.002168] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.498 [2024-07-13 20:21:51.002189] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.498 [2024-07-13 20:21:51.002203] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.498 [2024-07-13 20:21:51.005268] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.498 [2024-07-13 20:21:51.014621] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.498 [2024-07-13 20:21:51.015059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.498 [2024-07-13 20:21:51.015088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.498 [2024-07-13 20:21:51.015104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.498 [2024-07-13 20:21:51.015346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.498 [2024-07-13 20:21:51.015561] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.498 [2024-07-13 20:21:51.015582] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.498 [2024-07-13 20:21:51.015595] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.498 [2024-07-13 20:21:51.018603] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.498 [2024-07-13 20:21:51.027956] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.498 [2024-07-13 20:21:51.028469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.498 [2024-07-13 20:21:51.028497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.498 [2024-07-13 20:21:51.028513] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.498 [2024-07-13 20:21:51.028750] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.498 [2024-07-13 20:21:51.028995] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.498 [2024-07-13 20:21:51.029017] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.498 [2024-07-13 20:21:51.029036] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.498 [2024-07-13 20:21:51.032049] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.498 [2024-07-13 20:21:51.041251] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.498 [2024-07-13 20:21:51.041651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.498 [2024-07-13 20:21:51.041679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.498 [2024-07-13 20:21:51.041695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.498 [2024-07-13 20:21:51.041950] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.498 [2024-07-13 20:21:51.042185] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.498 [2024-07-13 20:21:51.042206] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.498 [2024-07-13 20:21:51.042219] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.498 [2024-07-13 20:21:51.045229] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.498 [2024-07-13 20:21:51.054634] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.498 [2024-07-13 20:21:51.055073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.498 [2024-07-13 20:21:51.055101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.498 [2024-07-13 20:21:51.055117] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.498 [2024-07-13 20:21:51.055376] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.498 [2024-07-13 20:21:51.055577] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.498 [2024-07-13 20:21:51.055596] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.498 [2024-07-13 20:21:51.055609] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.498 [2024-07-13 20:21:51.058632] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.498 [2024-07-13 20:21:51.068001] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.498 [2024-07-13 20:21:51.068463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.498 [2024-07-13 20:21:51.068490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.498 [2024-07-13 20:21:51.068521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.498 [2024-07-13 20:21:51.068773] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.498 [2024-07-13 20:21:51.069019] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.498 [2024-07-13 20:21:51.069041] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.498 [2024-07-13 20:21:51.069055] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.498 [2024-07-13 20:21:51.072057] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.498 [2024-07-13 20:21:51.081374] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.498 [2024-07-13 20:21:51.081817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.498 [2024-07-13 20:21:51.081848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.498 [2024-07-13 20:21:51.081889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.498 [2024-07-13 20:21:51.082135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.498 [2024-07-13 20:21:51.082372] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.498 [2024-07-13 20:21:51.082392] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.498 [2024-07-13 20:21:51.082405] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.498 [2024-07-13 20:21:51.085399] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.498 [2024-07-13 20:21:51.094767] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.498 [2024-07-13 20:21:51.095220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.498 [2024-07-13 20:21:51.095249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.498 [2024-07-13 20:21:51.095265] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.498 [2024-07-13 20:21:51.095480] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.498 [2024-07-13 20:21:51.095700] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.498 [2024-07-13 20:21:51.095721] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.499 [2024-07-13 20:21:51.095735] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.499 [2024-07-13 20:21:51.099107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.499 [2024-07-13 20:21:51.108133] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.499 [2024-07-13 20:21:51.108562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.499 [2024-07-13 20:21:51.108589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.499 [2024-07-13 20:21:51.108605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.499 [2024-07-13 20:21:51.108859] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.499 [2024-07-13 20:21:51.109094] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.499 [2024-07-13 20:21:51.109116] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.499 [2024-07-13 20:21:51.109130] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.499 [2024-07-13 20:21:51.112189] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.499 [2024-07-13 20:21:51.121360] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.499 [2024-07-13 20:21:51.121803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.499 [2024-07-13 20:21:51.121830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.499 [2024-07-13 20:21:51.121861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.499 [2024-07-13 20:21:51.122101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.499 [2024-07-13 20:21:51.122324] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.499 [2024-07-13 20:21:51.122345] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.499 [2024-07-13 20:21:51.122358] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.499 [2024-07-13 20:21:51.125354] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.499 [2024-07-13 20:21:51.134727] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.499 [2024-07-13 20:21:51.135219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.499 [2024-07-13 20:21:51.135247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.499 [2024-07-13 20:21:51.135263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.499 [2024-07-13 20:21:51.135502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.499 [2024-07-13 20:21:51.135701] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.499 [2024-07-13 20:21:51.135721] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.499 [2024-07-13 20:21:51.135734] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.499 [2024-07-13 20:21:51.138754] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.499 [2024-07-13 20:21:51.148293] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.499 [2024-07-13 20:21:51.148694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.499 [2024-07-13 20:21:51.148722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.499 [2024-07-13 20:21:51.148738] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.499 [2024-07-13 20:21:51.148994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.499 [2024-07-13 20:21:51.149224] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.499 [2024-07-13 20:21:51.149258] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.499 [2024-07-13 20:21:51.149272] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.758 [2024-07-13 20:21:51.152533] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.758 [2024-07-13 20:21:51.161698] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.758 [2024-07-13 20:21:51.162199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.758 [2024-07-13 20:21:51.162228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.758 [2024-07-13 20:21:51.162245] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.758 [2024-07-13 20:21:51.162487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.758 [2024-07-13 20:21:51.162703] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.758 [2024-07-13 20:21:51.162723] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.758 [2024-07-13 20:21:51.162736] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.758 [2024-07-13 20:21:51.165773] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.758 [2024-07-13 20:21:51.174953] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.758 [2024-07-13 20:21:51.175411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.758 [2024-07-13 20:21:51.175439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.758 [2024-07-13 20:21:51.175455] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.758 [2024-07-13 20:21:51.175708] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.758 [2024-07-13 20:21:51.175952] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.758 [2024-07-13 20:21:51.175973] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.758 [2024-07-13 20:21:51.175987] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.758 [2024-07-13 20:21:51.179000] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.758 [2024-07-13 20:21:51.188196] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.758 [2024-07-13 20:21:51.188639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.758 [2024-07-13 20:21:51.188681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.758 [2024-07-13 20:21:51.188698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.758 [2024-07-13 20:21:51.188964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.758 [2024-07-13 20:21:51.189192] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.758 [2024-07-13 20:21:51.189227] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.758 [2024-07-13 20:21:51.189240] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.758 [2024-07-13 20:21:51.192232] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.758 [2024-07-13 20:21:51.201395] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.758 [2024-07-13 20:21:51.201824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.758 [2024-07-13 20:21:51.201851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.758 [2024-07-13 20:21:51.201875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.758 [2024-07-13 20:21:51.202106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.758 [2024-07-13 20:21:51.202342] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.758 [2024-07-13 20:21:51.202362] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.758 [2024-07-13 20:21:51.202376] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.758 [2024-07-13 20:21:51.205378] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.758 [2024-07-13 20:21:51.214701] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.758 [2024-07-13 20:21:51.215119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.758 [2024-07-13 20:21:51.215146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.758 [2024-07-13 20:21:51.215171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.758 [2024-07-13 20:21:51.215417] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.759 [2024-07-13 20:21:51.215633] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.759 [2024-07-13 20:21:51.215652] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.759 [2024-07-13 20:21:51.215665] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.759 [2024-07-13 20:21:51.218690] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.759 [2024-07-13 20:21:51.227931] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.759 [2024-07-13 20:21:51.228406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.759 [2024-07-13 20:21:51.228434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.759 [2024-07-13 20:21:51.228449] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.759 [2024-07-13 20:21:51.228686] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.759 [2024-07-13 20:21:51.228929] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.759 [2024-07-13 20:21:51.228951] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.759 [2024-07-13 20:21:51.228964] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.759 [2024-07-13 20:21:51.231981] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.759 [2024-07-13 20:21:51.241188] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.759 [2024-07-13 20:21:51.241617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.759 [2024-07-13 20:21:51.241645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.759 [2024-07-13 20:21:51.241661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.759 [2024-07-13 20:21:51.241927] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.759 [2024-07-13 20:21:51.242140] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.759 [2024-07-13 20:21:51.242161] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.759 [2024-07-13 20:21:51.242190] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.759 [2024-07-13 20:21:51.245205] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.759 [2024-07-13 20:21:51.254465] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.759 [2024-07-13 20:21:51.254949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.759 [2024-07-13 20:21:51.254977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.759 [2024-07-13 20:21:51.254993] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.759 [2024-07-13 20:21:51.255248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.759 [2024-07-13 20:21:51.255449] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.759 [2024-07-13 20:21:51.255472] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.759 [2024-07-13 20:21:51.255486] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.759 [2024-07-13 20:21:51.258530] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.759 [2024-07-13 20:21:51.267705] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.759 [2024-07-13 20:21:51.268137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.759 [2024-07-13 20:21:51.268179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.759 [2024-07-13 20:21:51.268195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.759 [2024-07-13 20:21:51.268465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.759 [2024-07-13 20:21:51.268666] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.759 [2024-07-13 20:21:51.268685] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.759 [2024-07-13 20:21:51.268698] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.759 [2024-07-13 20:21:51.271686] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.759 [2024-07-13 20:21:51.281065] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.759 [2024-07-13 20:21:51.281506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.759 [2024-07-13 20:21:51.281533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.759 [2024-07-13 20:21:51.281549] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.759 [2024-07-13 20:21:51.281818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.759 [2024-07-13 20:21:51.282053] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.759 [2024-07-13 20:21:51.282075] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.759 [2024-07-13 20:21:51.282089] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.759 [2024-07-13 20:21:51.285082] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.759 [2024-07-13 20:21:51.294402] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.759 [2024-07-13 20:21:51.294815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.759 [2024-07-13 20:21:51.294842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.759 [2024-07-13 20:21:51.294879] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.759 [2024-07-13 20:21:51.295112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.759 [2024-07-13 20:21:51.295348] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.759 [2024-07-13 20:21:51.295368] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.759 [2024-07-13 20:21:51.295381] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.759 [2024-07-13 20:21:51.298373] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.759 [2024-07-13 20:21:51.307712] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.759 [2024-07-13 20:21:51.308136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.759 [2024-07-13 20:21:51.308179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.759 [2024-07-13 20:21:51.308195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.759 [2024-07-13 20:21:51.308466] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.759 [2024-07-13 20:21:51.308666] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.759 [2024-07-13 20:21:51.308686] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.759 [2024-07-13 20:21:51.308699] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.759 [2024-07-13 20:21:51.311726] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.759 [2024-07-13 20:21:51.321114] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.759 [2024-07-13 20:21:51.321575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.759 [2024-07-13 20:21:51.321603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.759 [2024-07-13 20:21:51.321619] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.759 [2024-07-13 20:21:51.321884] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.759 [2024-07-13 20:21:51.322102] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.759 [2024-07-13 20:21:51.322123] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.759 [2024-07-13 20:21:51.322137] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.759 [2024-07-13 20:21:51.325252] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.759 [2024-07-13 20:21:51.334449] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.759 [2024-07-13 20:21:51.334838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.759 [2024-07-13 20:21:51.334887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.759 [2024-07-13 20:21:51.334904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.759 [2024-07-13 20:21:51.335135] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.759 [2024-07-13 20:21:51.335351] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.759 [2024-07-13 20:21:51.335370] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.759 [2024-07-13 20:21:51.335383] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.759 [2024-07-13 20:21:51.338464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.759 [2024-07-13 20:21:51.347901] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.759 [2024-07-13 20:21:51.348309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.759 [2024-07-13 20:21:51.348337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.759 [2024-07-13 20:21:51.348352] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.759 [2024-07-13 20:21:51.348573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.759 [2024-07-13 20:21:51.348792] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.759 [2024-07-13 20:21:51.348814] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.759 [2024-07-13 20:21:51.348828] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.759 [2024-07-13 20:21:51.352173] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.759 [2024-07-13 20:21:51.361207] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.759 [2024-07-13 20:21:51.361642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.759 [2024-07-13 20:21:51.361670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.759 [2024-07-13 20:21:51.361686] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.760 [2024-07-13 20:21:51.361954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.760 [2024-07-13 20:21:51.362189] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.760 [2024-07-13 20:21:51.362224] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.760 [2024-07-13 20:21:51.362237] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.760 [2024-07-13 20:21:51.365280] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.760 [2024-07-13 20:21:51.374668] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.760 [2024-07-13 20:21:51.375137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.760 [2024-07-13 20:21:51.375179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.760 [2024-07-13 20:21:51.375195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.760 [2024-07-13 20:21:51.375444] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.760 [2024-07-13 20:21:51.375644] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.760 [2024-07-13 20:21:51.375663] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.760 [2024-07-13 20:21:51.375676] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.760 [2024-07-13 20:21:51.378679] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.760 [2024-07-13 20:21:51.388540] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.760 [2024-07-13 20:21:51.389009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.760 [2024-07-13 20:21:51.389041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.760 [2024-07-13 20:21:51.389059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.760 [2024-07-13 20:21:51.389298] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.760 [2024-07-13 20:21:51.389541] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.760 [2024-07-13 20:21:51.389565] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.760 [2024-07-13 20:21:51.389586] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.760 [2024-07-13 20:21:51.393200] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:03.760 [2024-07-13 20:21:51.402533] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:03.760 [2024-07-13 20:21:51.402987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.760 [2024-07-13 20:21:51.403015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:03.760 [2024-07-13 20:21:51.403032] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:03.760 [2024-07-13 20:21:51.403288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:03.760 [2024-07-13 20:21:51.403532] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:03.760 [2024-07-13 20:21:51.403556] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:03.760 [2024-07-13 20:21:51.403572] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:03.760 [2024-07-13 20:21:51.407173] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.020 [2024-07-13 20:21:51.416549] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.020 [2024-07-13 20:21:51.417015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.020 [2024-07-13 20:21:51.417048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.020 [2024-07-13 20:21:51.417066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.020 [2024-07-13 20:21:51.417306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.020 [2024-07-13 20:21:51.417550] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.020 [2024-07-13 20:21:51.417575] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.020 [2024-07-13 20:21:51.417592] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.020 [2024-07-13 20:21:51.421294] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.020 [2024-07-13 20:21:51.430422] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.020 [2024-07-13 20:21:51.430899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.020 [2024-07-13 20:21:51.430932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.020 [2024-07-13 20:21:51.430950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.020 [2024-07-13 20:21:51.431222] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.020 [2024-07-13 20:21:51.431468] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.020 [2024-07-13 20:21:51.431493] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.020 [2024-07-13 20:21:51.431509] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.021 [2024-07-13 20:21:51.435107] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.021 [2024-07-13 20:21:51.444433] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.021 [2024-07-13 20:21:51.444841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.021 [2024-07-13 20:21:51.444886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.021 [2024-07-13 20:21:51.444907] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.021 [2024-07-13 20:21:51.445146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.021 [2024-07-13 20:21:51.445390] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.021 [2024-07-13 20:21:51.445414] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.021 [2024-07-13 20:21:51.445430] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.021 [2024-07-13 20:21:51.449025] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.021 [2024-07-13 20:21:51.458277] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.021 [2024-07-13 20:21:51.458833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.021 [2024-07-13 20:21:51.458892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.021 [2024-07-13 20:21:51.458911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.021 [2024-07-13 20:21:51.459142] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.021 [2024-07-13 20:21:51.459381] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.021 [2024-07-13 20:21:51.459402] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.021 [2024-07-13 20:21:51.459415] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.021 [2024-07-13 20:21:51.463063] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.021 [2024-07-13 20:21:51.472313] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.021 [2024-07-13 20:21:51.472826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.021 [2024-07-13 20:21:51.472883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.021 [2024-07-13 20:21:51.472903] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.021 [2024-07-13 20:21:51.473136] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.021 [2024-07-13 20:21:51.473376] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.021 [2024-07-13 20:21:51.473410] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.021 [2024-07-13 20:21:51.473423] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.021 [2024-07-13 20:21:51.477071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.021 [2024-07-13 20:21:51.486330] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.021 [2024-07-13 20:21:51.486927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.021 [2024-07-13 20:21:51.486957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.021 [2024-07-13 20:21:51.486973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.021 [2024-07-13 20:21:51.487189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.021 [2024-07-13 20:21:51.487462] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.021 [2024-07-13 20:21:51.487487] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.021 [2024-07-13 20:21:51.487503] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.021 [2024-07-13 20:21:51.491124] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.021 [2024-07-13 20:21:51.500249] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.021 [2024-07-13 20:21:51.500707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.021 [2024-07-13 20:21:51.500735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.021 [2024-07-13 20:21:51.500767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.021 [2024-07-13 20:21:51.501018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.021 [2024-07-13 20:21:51.501253] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.021 [2024-07-13 20:21:51.501277] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.021 [2024-07-13 20:21:51.501294] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.021 [2024-07-13 20:21:51.504854] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.021 [2024-07-13 20:21:51.514102] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.021 [2024-07-13 20:21:51.514563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.021 [2024-07-13 20:21:51.514594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.021 [2024-07-13 20:21:51.514612] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.021 [2024-07-13 20:21:51.514851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.021 [2024-07-13 20:21:51.515105] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.021 [2024-07-13 20:21:51.515129] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.021 [2024-07-13 20:21:51.515145] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.021 [2024-07-13 20:21:51.518748] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.021 [2024-07-13 20:21:51.528089] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.021 [2024-07-13 20:21:51.528580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.021 [2024-07-13 20:21:51.528628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.021 [2024-07-13 20:21:51.528646] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.021 [2024-07-13 20:21:51.528896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.021 [2024-07-13 20:21:51.529141] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.021 [2024-07-13 20:21:51.529165] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.021 [2024-07-13 20:21:51.529181] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.021 [2024-07-13 20:21:51.532768] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.021 [2024-07-13 20:21:51.542111] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.021 [2024-07-13 20:21:51.542607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.021 [2024-07-13 20:21:51.542634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.021 [2024-07-13 20:21:51.542649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.021 [2024-07-13 20:21:51.542915] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.021 [2024-07-13 20:21:51.543160] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.021 [2024-07-13 20:21:51.543184] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.021 [2024-07-13 20:21:51.543200] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.021 [2024-07-13 20:21:51.546788] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.021 [2024-07-13 20:21:51.556121] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.021 [2024-07-13 20:21:51.556618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.021 [2024-07-13 20:21:51.556644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.021 [2024-07-13 20:21:51.556659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.021 [2024-07-13 20:21:51.556906] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.022 [2024-07-13 20:21:51.557157] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.022 [2024-07-13 20:21:51.557181] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.022 [2024-07-13 20:21:51.557197] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.022 [2024-07-13 20:21:51.560793] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.022 [2024-07-13 20:21:51.570127] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.022 [2024-07-13 20:21:51.570622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.022 [2024-07-13 20:21:51.570649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.022 [2024-07-13 20:21:51.570680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.022 [2024-07-13 20:21:51.570948] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.022 [2024-07-13 20:21:51.571192] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.022 [2024-07-13 20:21:51.571216] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.022 [2024-07-13 20:21:51.571232] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.022 [2024-07-13 20:21:51.574817] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.022 [2024-07-13 20:21:51.584175] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.022 [2024-07-13 20:21:51.584656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.022 [2024-07-13 20:21:51.584704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.022 [2024-07-13 20:21:51.584727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.022 [2024-07-13 20:21:51.584977] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.022 [2024-07-13 20:21:51.585222] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.022 [2024-07-13 20:21:51.585246] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.022 [2024-07-13 20:21:51.585262] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.022 [2024-07-13 20:21:51.588842] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.022 [2024-07-13 20:21:51.598189] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.022 [2024-07-13 20:21:51.598610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.022 [2024-07-13 20:21:51.598641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.022 [2024-07-13 20:21:51.598659] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.022 [2024-07-13 20:21:51.598909] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.022 [2024-07-13 20:21:51.599155] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.022 [2024-07-13 20:21:51.599178] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.022 [2024-07-13 20:21:51.599194] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.022 [2024-07-13 20:21:51.602787] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.022 [2024-07-13 20:21:51.612124] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.022 [2024-07-13 20:21:51.612525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.022 [2024-07-13 20:21:51.612556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.022 [2024-07-13 20:21:51.612574] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.022 [2024-07-13 20:21:51.612813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.022 [2024-07-13 20:21:51.613067] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.022 [2024-07-13 20:21:51.613092] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.022 [2024-07-13 20:21:51.613107] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.022 [2024-07-13 20:21:51.616693] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.022 [2024-07-13 20:21:51.626027] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.022 [2024-07-13 20:21:51.626460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.022 [2024-07-13 20:21:51.626487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.022 [2024-07-13 20:21:51.626503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.022 [2024-07-13 20:21:51.626748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.022 [2024-07-13 20:21:51.627010] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.022 [2024-07-13 20:21:51.627036] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.022 [2024-07-13 20:21:51.627051] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.022 [2024-07-13 20:21:51.630644] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.022 [2024-07-13 20:21:51.639979] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.022 [2024-07-13 20:21:51.640448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.022 [2024-07-13 20:21:51.640479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.022 [2024-07-13 20:21:51.640497] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.022 [2024-07-13 20:21:51.640736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.022 [2024-07-13 20:21:51.640992] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.022 [2024-07-13 20:21:51.641017] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.022 [2024-07-13 20:21:51.641032] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.022 [2024-07-13 20:21:51.644616] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.022 [2024-07-13 20:21:51.653946] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.022 [2024-07-13 20:21:51.654413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.022 [2024-07-13 20:21:51.654441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.022 [2024-07-13 20:21:51.654471] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.022 [2024-07-13 20:21:51.654720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.022 [2024-07-13 20:21:51.654975] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.022 [2024-07-13 20:21:51.655001] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.022 [2024-07-13 20:21:51.655016] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.022 [2024-07-13 20:21:51.658603] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.022 [2024-07-13 20:21:51.667940] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.022 [2024-07-13 20:21:51.668386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.022 [2024-07-13 20:21:51.668417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.022 [2024-07-13 20:21:51.668435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.022 [2024-07-13 20:21:51.668674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.022 [2024-07-13 20:21:51.668930] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.022 [2024-07-13 20:21:51.668955] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.022 [2024-07-13 20:21:51.668971] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.022 [2024-07-13 20:21:51.672632] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.283 [2024-07-13 20:21:51.682016] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.283 [2024-07-13 20:21:51.682542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.283 [2024-07-13 20:21:51.682585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.283 [2024-07-13 20:21:51.682601] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.283 [2024-07-13 20:21:51.682894] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.283 [2024-07-13 20:21:51.683140] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.283 [2024-07-13 20:21:51.683164] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.283 [2024-07-13 20:21:51.683180] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.283 [2024-07-13 20:21:51.686769] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.283 [2024-07-13 20:21:51.695897] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.283 [2024-07-13 20:21:51.696368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.283 [2024-07-13 20:21:51.696399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.283 [2024-07-13 20:21:51.696417] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.283 [2024-07-13 20:21:51.696656] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.283 [2024-07-13 20:21:51.696911] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.283 [2024-07-13 20:21:51.696936] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.283 [2024-07-13 20:21:51.696952] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.283 [2024-07-13 20:21:51.700541] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.283 [2024-07-13 20:21:51.709951] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.283 [2024-07-13 20:21:51.710386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.283 [2024-07-13 20:21:51.710417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.283 [2024-07-13 20:21:51.710435] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.283 [2024-07-13 20:21:51.710674] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.283 [2024-07-13 20:21:51.710929] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.283 [2024-07-13 20:21:51.710954] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.283 [2024-07-13 20:21:51.710970] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.283 [2024-07-13 20:21:51.714558] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.283 [2024-07-13 20:21:51.723893] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.283 [2024-07-13 20:21:51.724320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.283 [2024-07-13 20:21:51.724352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.283 [2024-07-13 20:21:51.724369] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.283 [2024-07-13 20:21:51.724614] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.283 [2024-07-13 20:21:51.724858] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.283 [2024-07-13 20:21:51.724894] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.283 [2024-07-13 20:21:51.724910] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.283 [2024-07-13 20:21:51.728498] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.283 [2024-07-13 20:21:51.737823] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.283 [2024-07-13 20:21:51.738283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.284 [2024-07-13 20:21:51.738314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.284 [2024-07-13 20:21:51.738332] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.284 [2024-07-13 20:21:51.738570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.284 [2024-07-13 20:21:51.738813] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.284 [2024-07-13 20:21:51.738838] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.284 [2024-07-13 20:21:51.738854] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.284 [2024-07-13 20:21:51.742455] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.284 [2024-07-13 20:21:51.751786] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.284 [2024-07-13 20:21:51.752252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.284 [2024-07-13 20:21:51.752279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.284 [2024-07-13 20:21:51.752311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.284 [2024-07-13 20:21:51.752567] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.284 [2024-07-13 20:21:51.752811] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.284 [2024-07-13 20:21:51.752835] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.284 [2024-07-13 20:21:51.752851] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.284 [2024-07-13 20:21:51.756450] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.284 [2024-07-13 20:21:51.765787] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.284 [2024-07-13 20:21:51.766253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.284 [2024-07-13 20:21:51.766280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.284 [2024-07-13 20:21:51.766296] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.284 [2024-07-13 20:21:51.766542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.284 [2024-07-13 20:21:51.766792] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.284 [2024-07-13 20:21:51.766816] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.284 [2024-07-13 20:21:51.766838] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.284 [2024-07-13 20:21:51.770438] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.284 [2024-07-13 20:21:51.779765] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.284 [2024-07-13 20:21:51.780218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.284 [2024-07-13 20:21:51.780249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.284 [2024-07-13 20:21:51.780266] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.284 [2024-07-13 20:21:51.780506] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.284 [2024-07-13 20:21:51.780750] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.284 [2024-07-13 20:21:51.780773] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.284 [2024-07-13 20:21:51.780789] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.284 [2024-07-13 20:21:51.784390] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.284 [2024-07-13 20:21:51.793719] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.284 [2024-07-13 20:21:51.794391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.284 [2024-07-13 20:21:51.794444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.284 [2024-07-13 20:21:51.794464] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.284 [2024-07-13 20:21:51.794711] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.284 [2024-07-13 20:21:51.794970] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.284 [2024-07-13 20:21:51.794996] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.284 [2024-07-13 20:21:51.795012] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.284 [2024-07-13 20:21:51.798602] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.284 [2024-07-13 20:21:51.807725] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.284 [2024-07-13 20:21:51.808198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.284 [2024-07-13 20:21:51.808230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.284 [2024-07-13 20:21:51.808248] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.284 [2024-07-13 20:21:51.808488] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.284 [2024-07-13 20:21:51.808732] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.284 [2024-07-13 20:21:51.808755] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.284 [2024-07-13 20:21:51.808771] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.284 [2024-07-13 20:21:51.812375] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.284 [2024-07-13 20:21:51.821703] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.284 [2024-07-13 20:21:51.822136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.284 [2024-07-13 20:21:51.822174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.284 [2024-07-13 20:21:51.822193] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.284 [2024-07-13 20:21:51.822433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.284 [2024-07-13 20:21:51.822677] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.284 [2024-07-13 20:21:51.822701] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.284 [2024-07-13 20:21:51.822717] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.284 [2024-07-13 20:21:51.826315] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.284 [2024-07-13 20:21:51.835645] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.284 [2024-07-13 20:21:51.836122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.284 [2024-07-13 20:21:51.836153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.284 [2024-07-13 20:21:51.836171] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.284 [2024-07-13 20:21:51.836411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.284 [2024-07-13 20:21:51.836655] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.284 [2024-07-13 20:21:51.836679] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.284 [2024-07-13 20:21:51.836695] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.284 [2024-07-13 20:21:51.840291] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.284 [2024-07-13 20:21:51.849619] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.284 [2024-07-13 20:21:51.850095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.284 [2024-07-13 20:21:51.850127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.284 [2024-07-13 20:21:51.850145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.284 [2024-07-13 20:21:51.850384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.284 [2024-07-13 20:21:51.850628] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.285 [2024-07-13 20:21:51.850652] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.285 [2024-07-13 20:21:51.850668] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.285 [2024-07-13 20:21:51.854266] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.285 [2024-07-13 20:21:51.863597] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.285 [2024-07-13 20:21:51.864046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.285 [2024-07-13 20:21:51.864074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.285 [2024-07-13 20:21:51.864090] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.285 [2024-07-13 20:21:51.864346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.285 [2024-07-13 20:21:51.864597] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.285 [2024-07-13 20:21:51.864622] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.285 [2024-07-13 20:21:51.864638] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.285 [2024-07-13 20:21:51.868237] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.285 [2024-07-13 20:21:51.877571] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.285 [2024-07-13 20:21:51.878030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.285 [2024-07-13 20:21:51.878073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.285 [2024-07-13 20:21:51.878089] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.285 [2024-07-13 20:21:51.878345] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.285 [2024-07-13 20:21:51.878589] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.285 [2024-07-13 20:21:51.878613] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.285 [2024-07-13 20:21:51.878629] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.285 [2024-07-13 20:21:51.882227] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.285 [2024-07-13 20:21:51.891553] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.285 [2024-07-13 20:21:51.891986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.285 [2024-07-13 20:21:51.892017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.285 [2024-07-13 20:21:51.892035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.285 [2024-07-13 20:21:51.892275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.285 [2024-07-13 20:21:51.892519] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.285 [2024-07-13 20:21:51.892542] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.285 [2024-07-13 20:21:51.892558] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.285 [2024-07-13 20:21:51.896159] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.285 [2024-07-13 20:21:51.905491] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.285 [2024-07-13 20:21:51.905918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.285 [2024-07-13 20:21:51.905949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.285 [2024-07-13 20:21:51.905967] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.285 [2024-07-13 20:21:51.906206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.285 [2024-07-13 20:21:51.906451] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.285 [2024-07-13 20:21:51.906474] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.285 [2024-07-13 20:21:51.906490] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.285 [2024-07-13 20:21:51.910084] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.285 [2024-07-13 20:21:51.919422] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.285 [2024-07-13 20:21:51.919879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.285 [2024-07-13 20:21:51.919910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.285 [2024-07-13 20:21:51.919928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.285 [2024-07-13 20:21:51.920167] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.285 [2024-07-13 20:21:51.920410] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.285 [2024-07-13 20:21:51.920434] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.285 [2024-07-13 20:21:51.920450] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.285 [2024-07-13 20:21:51.924046] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.285 [2024-07-13 20:21:51.933372] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.285 [2024-07-13 20:21:51.933816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.285 [2024-07-13 20:21:51.933843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.285 [2024-07-13 20:21:51.933897] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.285 [2024-07-13 20:21:51.934176] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.285 [2024-07-13 20:21:51.934422] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.285 [2024-07-13 20:21:51.934446] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.285 [2024-07-13 20:21:51.934462] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.285 [2024-07-13 20:21:51.938153] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.547 [2024-07-13 20:21:51.947483] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.547 [2024-07-13 20:21:51.947943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.547 [2024-07-13 20:21:51.947977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.547 [2024-07-13 20:21:51.947996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.547 [2024-07-13 20:21:51.948236] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.547 [2024-07-13 20:21:51.948479] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.547 [2024-07-13 20:21:51.948503] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.547 [2024-07-13 20:21:51.948519] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.547 [2024-07-13 20:21:51.952121] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.547 [2024-07-13 20:21:51.961457] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.547 [2024-07-13 20:21:51.961893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.547 [2024-07-13 20:21:51.961926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.547 [2024-07-13 20:21:51.961951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.547 [2024-07-13 20:21:51.962191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.547 [2024-07-13 20:21:51.962435] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.547 [2024-07-13 20:21:51.962459] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.547 [2024-07-13 20:21:51.962476] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.547 [2024-07-13 20:21:51.966071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.547 [2024-07-13 20:21:51.975410] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.547 [2024-07-13 20:21:51.975839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.547 [2024-07-13 20:21:51.975890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.547 [2024-07-13 20:21:51.975921] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.547 [2024-07-13 20:21:51.976182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.547 [2024-07-13 20:21:51.976426] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.547 [2024-07-13 20:21:51.976450] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.547 [2024-07-13 20:21:51.976466] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.547 [2024-07-13 20:21:51.980064] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.547 [2024-07-13 20:21:51.989399] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.547 [2024-07-13 20:21:51.989964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.547 [2024-07-13 20:21:51.989995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.547 [2024-07-13 20:21:51.990013] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.547 [2024-07-13 20:21:51.990252] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.547 [2024-07-13 20:21:51.990495] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.547 [2024-07-13 20:21:51.990519] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.547 [2024-07-13 20:21:51.990535] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.547 [2024-07-13 20:21:51.994131] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.547 [2024-07-13 20:21:52.003267] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.548 [2024-07-13 20:21:52.003791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.548 [2024-07-13 20:21:52.003839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.548 [2024-07-13 20:21:52.003857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.548 [2024-07-13 20:21:52.004107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.548 [2024-07-13 20:21:52.004351] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.548 [2024-07-13 20:21:52.004381] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.548 [2024-07-13 20:21:52.004398] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.548 [2024-07-13 20:21:52.008013] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.548 [2024-07-13 20:21:52.017137] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.548 [2024-07-13 20:21:52.017697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.548 [2024-07-13 20:21:52.017750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.548 [2024-07-13 20:21:52.017768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.548 [2024-07-13 20:21:52.018021] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.548 [2024-07-13 20:21:52.018266] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.548 [2024-07-13 20:21:52.018290] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.548 [2024-07-13 20:21:52.018305] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.548 [2024-07-13 20:21:52.021908] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.548 [2024-07-13 20:21:52.031044] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.548 [2024-07-13 20:21:52.031472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.548 [2024-07-13 20:21:52.031503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.548 [2024-07-13 20:21:52.031522] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.548 [2024-07-13 20:21:52.031761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.548 [2024-07-13 20:21:52.032015] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.548 [2024-07-13 20:21:52.032041] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.548 [2024-07-13 20:21:52.032057] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.548 [2024-07-13 20:21:52.035640] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.548 [2024-07-13 20:21:52.044969] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.548 [2024-07-13 20:21:52.045419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.548 [2024-07-13 20:21:52.045450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.548 [2024-07-13 20:21:52.045467] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.548 [2024-07-13 20:21:52.045707] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.548 [2024-07-13 20:21:52.045961] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.548 [2024-07-13 20:21:52.045986] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.548 [2024-07-13 20:21:52.046002] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.548 [2024-07-13 20:21:52.049585] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.548 [2024-07-13 20:21:52.058917] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.548 [2024-07-13 20:21:52.059346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.548 [2024-07-13 20:21:52.059377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.548 [2024-07-13 20:21:52.059395] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.548 [2024-07-13 20:21:52.059634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.548 [2024-07-13 20:21:52.059892] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.548 [2024-07-13 20:21:52.059917] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.548 [2024-07-13 20:21:52.059934] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.548 [2024-07-13 20:21:52.063519] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.548 [2024-07-13 20:21:52.072846] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:04.548 [2024-07-13 20:21:52.073304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.548 [2024-07-13 20:21:52.073335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420
00:34:04.548 [2024-07-13 20:21:52.073353] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set
00:34:04.548 [2024-07-13 20:21:52.073593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor
00:34:04.548 [2024-07-13 20:21:52.073837] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:04.548 [2024-07-13 20:21:52.073860] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:04.548 [2024-07-13 20:21:52.073889] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:04.548 [2024-07-13 20:21:52.077481] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:04.548 [2024-07-13 20:21:52.086824] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.548 [2024-07-13 20:21:52.087284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.548 [2024-07-13 20:21:52.087315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.548 [2024-07-13 20:21:52.087333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.548 [2024-07-13 20:21:52.087572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.548 [2024-07-13 20:21:52.087816] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.548 [2024-07-13 20:21:52.087840] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.548 [2024-07-13 20:21:52.087855] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.548 [2024-07-13 20:21:52.091478] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.548 [2024-07-13 20:21:52.100808] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.548 [2024-07-13 20:21:52.101270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.548 [2024-07-13 20:21:52.101298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.548 [2024-07-13 20:21:52.101328] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.548 [2024-07-13 20:21:52.101580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.548 [2024-07-13 20:21:52.101825] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.548 [2024-07-13 20:21:52.101849] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.548 [2024-07-13 20:21:52.101874] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.548 [2024-07-13 20:21:52.105464] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.548 [2024-07-13 20:21:52.114801] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.548 [2024-07-13 20:21:52.115259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.548 [2024-07-13 20:21:52.115290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.548 [2024-07-13 20:21:52.115308] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.548 [2024-07-13 20:21:52.115548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.548 [2024-07-13 20:21:52.115791] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.548 [2024-07-13 20:21:52.115815] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.549 [2024-07-13 20:21:52.115831] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.549 [2024-07-13 20:21:52.119426] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.549 [2024-07-13 20:21:52.128759] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.549 [2024-07-13 20:21:52.129234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.549 [2024-07-13 20:21:52.129276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.549 [2024-07-13 20:21:52.129293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.549 [2024-07-13 20:21:52.129542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.549 [2024-07-13 20:21:52.129786] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.549 [2024-07-13 20:21:52.129809] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.549 [2024-07-13 20:21:52.129825] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.549 [2024-07-13 20:21:52.133421] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.549 [2024-07-13 20:21:52.142748] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.549 [2024-07-13 20:21:52.143155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.549 [2024-07-13 20:21:52.143185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.549 [2024-07-13 20:21:52.143203] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.549 [2024-07-13 20:21:52.143442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.549 [2024-07-13 20:21:52.143686] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.549 [2024-07-13 20:21:52.143710] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.549 [2024-07-13 20:21:52.143731] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.549 [2024-07-13 20:21:52.147329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.549 [2024-07-13 20:21:52.156674] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.549 [2024-07-13 20:21:52.157130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.549 [2024-07-13 20:21:52.157162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.549 [2024-07-13 20:21:52.157180] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.549 [2024-07-13 20:21:52.157419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.549 [2024-07-13 20:21:52.157664] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.549 [2024-07-13 20:21:52.157687] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.549 [2024-07-13 20:21:52.157703] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.549 [2024-07-13 20:21:52.161304] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.549 [2024-07-13 20:21:52.170624] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.549 [2024-07-13 20:21:52.171141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.549 [2024-07-13 20:21:52.171183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.549 [2024-07-13 20:21:52.171199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.549 [2024-07-13 20:21:52.171460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.549 [2024-07-13 20:21:52.171704] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.549 [2024-07-13 20:21:52.171728] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.549 [2024-07-13 20:21:52.171743] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.549 [2024-07-13 20:21:52.175340] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.549 [2024-07-13 20:21:52.184669] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.549 [2024-07-13 20:21:52.185121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.549 [2024-07-13 20:21:52.185152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.549 [2024-07-13 20:21:52.185170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.549 [2024-07-13 20:21:52.185409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.549 [2024-07-13 20:21:52.185653] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.549 [2024-07-13 20:21:52.185676] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.549 [2024-07-13 20:21:52.185692] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.549 [2024-07-13 20:21:52.189288] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.549 [2024-07-13 20:21:52.198686] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.549 [2024-07-13 20:21:52.199206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.549 [2024-07-13 20:21:52.199245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.549 [2024-07-13 20:21:52.199264] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.549 [2024-07-13 20:21:52.199505] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.549 [2024-07-13 20:21:52.199749] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.549 [2024-07-13 20:21:52.199773] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.549 [2024-07-13 20:21:52.199789] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.809 [2024-07-13 20:21:52.203497] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.809 [2024-07-13 20:21:52.212749] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.809 [2024-07-13 20:21:52.213222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.809 [2024-07-13 20:21:52.213254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.809 [2024-07-13 20:21:52.213273] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.809 [2024-07-13 20:21:52.213512] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.809 [2024-07-13 20:21:52.213756] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.809 [2024-07-13 20:21:52.213780] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.809 [2024-07-13 20:21:52.213795] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.809 [2024-07-13 20:21:52.217393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.809 [2024-07-13 20:21:52.226715] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.809 [2024-07-13 20:21:52.227193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.809 [2024-07-13 20:21:52.227235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.809 [2024-07-13 20:21:52.227252] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.809 [2024-07-13 20:21:52.227502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.809 [2024-07-13 20:21:52.227746] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.809 [2024-07-13 20:21:52.227770] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.809 [2024-07-13 20:21:52.227786] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.809 [2024-07-13 20:21:52.231386] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.809 [2024-07-13 20:21:52.240712] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.809 [2024-07-13 20:21:52.241158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.809 [2024-07-13 20:21:52.241189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.809 [2024-07-13 20:21:52.241207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.809 [2024-07-13 20:21:52.241447] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.809 [2024-07-13 20:21:52.241697] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.809 [2024-07-13 20:21:52.241720] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.809 [2024-07-13 20:21:52.241737] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.809 [2024-07-13 20:21:52.245334] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.809 [2024-07-13 20:21:52.254663] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.809 [2024-07-13 20:21:52.255127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.809 [2024-07-13 20:21:52.255158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.809 [2024-07-13 20:21:52.255177] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.809 [2024-07-13 20:21:52.255416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.809 [2024-07-13 20:21:52.255660] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.809 [2024-07-13 20:21:52.255683] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.809 [2024-07-13 20:21:52.255699] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.809 [2024-07-13 20:21:52.259297] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.809 [2024-07-13 20:21:52.268632] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.809 [2024-07-13 20:21:52.269094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.809 [2024-07-13 20:21:52.269122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.809 [2024-07-13 20:21:52.269152] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.809 [2024-07-13 20:21:52.269409] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.809 [2024-07-13 20:21:52.269654] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.809 [2024-07-13 20:21:52.269678] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.809 [2024-07-13 20:21:52.269694] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.809 [2024-07-13 20:21:52.273291] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:04.809 [2024-07-13 20:21:52.282619] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.809 [2024-07-13 20:21:52.283050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.809 [2024-07-13 20:21:52.283082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.809 [2024-07-13 20:21:52.283099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.809 [2024-07-13 20:21:52.283339] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.810 [2024-07-13 20:21:52.283583] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.810 [2024-07-13 20:21:52.283607] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.810 [2024-07-13 20:21:52.283622] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.810 [2024-07-13 20:21:52.287223] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:04.810 [2024-07-13 20:21:52.296552] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:04.810 [2024-07-13 20:21:52.297133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.810 [2024-07-13 20:21:52.297190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:04.810 [2024-07-13 20:21:52.297208] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:04.810 [2024-07-13 20:21:52.297448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:04.810 [2024-07-13 20:21:52.297692] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:04.810 [2024-07-13 20:21:52.297715] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:04.810 [2024-07-13 20:21:52.297732] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:04.810 [2024-07-13 20:21:52.301329] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
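errno 111 is ECONNREFUSED: while bdevperf keeps resetting the controller, nothing is accepting on 10.0.0.2:4420 because the target process was killed below and is being relaunched. As a minimal sketch (a hypothetical debugging helper, not part of the test suite), a watcher like this confirms when the listener is back, using bash's built-in /dev/tcp redirection; the address and port mirror the log above:

    # wait_for_port: poll until a TCP listener accepts on ADDR:PORT, or time out.
    wait_for_port() {
        local addr=$1 port=$2 deadline=$((SECONDS + ${3:-30}))
        # The subshell scopes fd 3; bash opens /dev/tcp/HOST/PORT itself.
        until (exec 3<>"/dev/tcp/${addr}/${port}") 2>/dev/null; do
            ((SECONDS >= deadline)) && return 1   # still ECONNREFUSED, give up
            sleep 0.5
        done
    }
    wait_for_port 10.0.0.2 4420 30 && echo "nvmf listener is back"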
00:34:04.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3348828 Killed "${NVMF_APP[@]}" "$@"
00:34:04.810 20:21:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:34:04.810 20:21:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:34:04.810 20:21:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:04.810 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:04.810 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:04.810 20:21:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3349775
00:34:04.810 20:21:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:34:04.810 20:21:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3349775
00:34:04.810 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 3349775 ']'
00:34:04.810 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:04.810 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:04.810 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:04.810 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:04.810 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
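For orientation, waitforlisten's readiness check boils down to polling the new target's RPC socket until it answers, or bailing out if the PID dies first. A condensed sketch of that loop, assuming the stock scripts/rpc.py client and its rpc_get_methods call; the real helper in autotest_common.sh carries more error handling than this:

    # Poll the RPC Unix socket until the freshly exec'd nvmf_tgt answers.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1      # target died early
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                        # never came up
    }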
00:34:04.810 [2024-07-13 20:21:52.366115] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:34:04.810 [2024-07-13 20:21:52.366189] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:04.811 EAL: No free 2048 kB hugepages reported on node 1
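This EAL notice is usually benign in these runs (the 2 MiB pages were reserved on another NUMA node), but when EAL init fails outright for lack of hugepages, reserving them per node via the standard sysfs interface is the direct fix. A sketch, assuming node 1 and 2048 kB pages as in the message; SPDK's scripts/setup.sh normally handles this for you:

    # Reserve 1024 x 2 MiB hugepages on NUMA node 1 (root required).
    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    # Or size everything through SPDK's helper (HUGEMEM is in MiB):
    HUGEMEM=4096 ./scripts/setup.sh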
00:34:04.811 [2024-07-13 20:21:52.436873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:34:05.072 [2024-07-13 20:21:52.524240] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:05.072 [2024-07-13 20:21:52.524270] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:05.072 [2024-07-13 20:21:52.524298] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:05.072 [2024-07-13 20:21:52.524311] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:05.072 [2024-07-13 20:21:52.524321] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:05.072 [2024-07-13 20:21:52.524373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:34:05.072 [2024-07-13 20:21:52.524438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:34:05.072 [2024-07-13 20:21:52.524441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
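The three reactors on cores 1-3 follow from -m 0xE (binary 1110: bits 1, 2 and 3 set). The trace notices above are actionable as printed; a sketch of both capture paths, assuming the spdk_trace binary from this build (verify the exact flags against spdk_trace -h for your version):

    # Live snapshot from the running target's shared-memory ring (name nvmf, instance 0):
    ./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # Offline: preserve the shm file first, then decode it after the run.
    cp /dev/shm/nvmf_trace.0 /tmp/
    ./build/bin/spdk_trace -f /tmp/nvmf_trace.0 > nvmf_trace_offline.txt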
00:34:05.073 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:05.073 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0
00:34:05.073 20:21:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:05.073 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:05.073 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:05.074 [2024-07-13 20:21:52.650436] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.074 [2024-07-13 20:21:52.650824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.074 [2024-07-13 20:21:52.650852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:05.074 [2024-07-13 20:21:52.650876] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:05.074 [2024-07-13 20:21:52.651094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:05.074 [2024-07-13 20:21:52.651315] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.074 [2024-07-13 20:21:52.651338] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.074 [2024-07-13 20:21:52.651353] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.074 [2024-07-13 20:21:52.654638] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:05.074 [2024-07-13 20:21:52.663117] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:05.074 [2024-07-13 20:21:52.664158] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.074 [2024-07-13 20:21:52.664590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.074 [2024-07-13 20:21:52.664618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:05.074 [2024-07-13 20:21:52.664634] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:05.074 [2024-07-13 20:21:52.664849] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:05.074 [2024-07-13 20:21:52.665078] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.074 [2024-07-13 20:21:52.665100] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.074 [2024-07-13 20:21:52.665115] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.074 [2024-07-13 20:21:52.668506] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:05.074 [2024-07-13 20:21:52.677679] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.074 [2024-07-13 20:21:52.678113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.074 [2024-07-13 20:21:52.678146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:05.074 [2024-07-13 20:21:52.678163] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:05.074 [2024-07-13 20:21:52.678393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:05.074 [2024-07-13 20:21:52.678606] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.074 [2024-07-13 20:21:52.678627] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.074 [2024-07-13 20:21:52.678641] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.074 [2024-07-13 20:21:52.681873] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:05.074 [2024-07-13 20:21:52.691313] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.074 [2024-07-13 20:21:52.691803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.074 [2024-07-13 20:21:52.691831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:05.074 [2024-07-13 20:21:52.691848] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:05.074 [2024-07-13 20:21:52.692079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:05.074 [2024-07-13 20:21:52.692299] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.074 [2024-07-13 20:21:52.692321] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.074 [2024-07-13 20:21:52.692336] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.074 [2024-07-13 20:21:52.695636] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.074 [2024-07-13 20:21:52.704953] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.074 [2024-07-13 20:21:52.705501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.074 [2024-07-13 20:21:52.705552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:05.074 [2024-07-13 20:21:52.705573] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:05.074 [2024-07-13 20:21:52.705801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:05.074 [2024-07-13 20:21:52.706033] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.074 [2024-07-13 20:21:52.706057] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.074 [2024-07-13 20:21:52.706074] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.074 [2024-07-13 20:21:52.709365] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:05.074 Malloc0 00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.074 [2024-07-13 20:21:52.718676] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.074 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:05.074 [2024-07-13 20:21:52.719059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.074 [2024-07-13 20:21:52.719088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x88ae70 with addr=10.0.0.2, port=4420 00:34:05.074 [2024-07-13 20:21:52.719104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x88ae70 is same with the state(5) to be set 00:34:05.074 [2024-07-13 20:21:52.719320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x88ae70 (9): Bad file descriptor 00:34:05.074 [2024-07-13 20:21:52.719540] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:05.074 [2024-07-13 20:21:52.719562] nvme_ctrlr.c:1751:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:05.075 [2024-07-13 20:21:52.719577] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:05.075 [2024-07-13 20:21:52.722938] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:05.075 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.075 20:21:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:05.075 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:05.075 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:05.334 [2024-07-13 20:21:52.730593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:05.334 [2024-07-13 20:21:52.732265] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:05.334 20:21:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:05.334 20:21:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3349118 00:34:05.334 [2024-07-13 20:21:52.768642] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:15.320 00:34:15.320 Latency(us) 00:34:15.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:15.320 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:15.320 Verification LBA range: start 0x0 length 0x4000 00:34:15.320 Nvme1n1 : 15.01 6878.41 26.87 8932.41 0.00 8070.62 1061.93 24272.59 00:34:15.320 =================================================================================================================== 00:34:15.320 Total : 6878.41 26.87 8932.41 0.00 8070.62 1061.93 24272.59 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:15.320 rmmod nvme_tcp 00:34:15.320 rmmod nvme_fabrics 00:34:15.320 rmmod nvme_keyring 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3349775 ']' 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3349775 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 3349775 ']' 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 3349775 00:34:15.320 20:22:01 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:15.320 20:22:01 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3349775 00:34:15.320 20:22:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:34:15.320 20:22:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:34:15.320 20:22:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3349775' 00:34:15.320 killing process with pid 3349775 00:34:15.320 20:22:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 3349775 00:34:15.320 20:22:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 3349775 00:34:15.320 20:22:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:15.320 20:22:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:15.320 20:22:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:15.320 20:22:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:15.320 20:22:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:15.320 20:22:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.320 20:22:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:15.320 20:22:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.736 20:22:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:16.736 00:34:16.736 real 0m22.191s 00:34:16.736 user 0m59.837s 00:34:16.736 sys 0m4.107s 00:34:16.736 20:22:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:16.736 20:22:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:16.736 ************************************ 00:34:16.736 END TEST nvmf_bdevperf 00:34:16.736 ************************************ 00:34:16.736 20:22:04 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:16.736 20:22:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:16.736 20:22:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:16.736 20:22:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:16.736 ************************************ 00:34:16.736 START TEST nvmf_target_disconnect 00:34:16.736 ************************************ 00:34:16.736 20:22:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:16.736 * Looking for test storage... 
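
For reference, the target bring-up that the rpc_cmd traces in the bdevperf test above perform reduces to five RPCs. The flags and names below are copied verbatim from the trace; rpc_cmd is the harness's wrapper, so issuing them through scripts/rpc.py is an assumed stand-alone equivalent, and the inline comments are glosses rather than trace output:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8192 B in-capsule data
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MiB ramdisk, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener RPC lands, the host's next reset attempt succeeds ("Resetting controller successful" above) and the fio-style verify workload runs to the latency summary.
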
00:34:16.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:16.736 20:22:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:16.736 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:16.736 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:16.736 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.737 20:22:04 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.999 20:22:04 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:16.999 20:22:04 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.999 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:16.999 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:16.999 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:16.999 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:16.999 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:16.999 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:16.999 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:16.999 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:16.999 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:16.999 20:22:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:16.999 20:22:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:16.999 20:22:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:16.999 20:22:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:17.000 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:17.000 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:17.000 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:17.000 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:17.000 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:17.000 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:17.000 20:22:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:17.000 20:22:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:17.000 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:17.000 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:17.000 20:22:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:17.000 20:22:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:18.900 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:18.900 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:18.900 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.901 20:22:06 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:18.901 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:18.901 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:18.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:18.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:34:18.901 00:34:18.901 --- 10.0.0.2 ping statistics --- 00:34:18.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.901 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:18.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:18.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:34:18.901 00:34:18.901 --- 10.0.0.1 ping statistics --- 00:34:18.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:18.901 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:18.901 ************************************ 00:34:18.901 START TEST nvmf_target_disconnect_tc1 00:34:18.901 ************************************ 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:18.901 
20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:18.901 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:19.159 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.159 [2024-07-13 20:22:06.600904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.159 [2024-07-13 20:22:06.600974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1615520 with addr=10.0.0.2, port=4420 00:34:19.159 [2024-07-13 20:22:06.601013] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:19.159 [2024-07-13 20:22:06.601038] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:19.159 [2024-07-13 20:22:06.601053] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:19.159 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:19.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:19.159 Initializing NVMe Controllers 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:19.159 00:34:19.159 real 0m0.092s 00:34:19.159 user 0m0.037s 00:34:19.159 sys 
0m0.054s 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:19.159 ************************************ 00:34:19.159 END TEST nvmf_target_disconnect_tc1 00:34:19.159 ************************************ 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:19.159 ************************************ 00:34:19.159 START TEST nvmf_target_disconnect_tc2 00:34:19.159 ************************************ 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3352924 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3352924 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3352924 ']' 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:19.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:19.159 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.160 [2024-07-13 20:22:06.708281] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
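
nvmf_target_disconnect_tc1 above passes by failing: the NOT wrapper launches the reconnect example before the nvmf target application is even started, spdk_nvme_probe() hits the same errno-111 connection refusal, and the nonzero exit status (es=1) is exactly what the test asserts. The invocation, reassembled from the trace with the workspace prefix shortened and all arguments verbatim:

    build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
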
00:34:19.160 [2024-07-13 20:22:06.708367] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:19.160 EAL: No free 2048 kB hugepages reported on node 1 00:34:19.160 [2024-07-13 20:22:06.772732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:19.440 [2024-07-13 20:22:06.859406] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:19.440 [2024-07-13 20:22:06.859460] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:19.440 [2024-07-13 20:22:06.859487] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:19.440 [2024-07-13 20:22:06.859499] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:19.440 [2024-07-13 20:22:06.859509] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.440 [2024-07-13 20:22:06.859593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:34:19.440 [2024-07-13 20:22:06.859658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:34:19.440 [2024-07-13 20:22:06.859722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:34:19.440 [2024-07-13 20:22:06.859724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:34:19.440 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:19.440 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:34:19.440 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:19.440 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:19.440 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.440 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:19.440 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:19.440 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.440 20:22:06 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.440 Malloc0 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.440 [2024-07-13 20:22:07.019794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.440 [2024-07-13 20:22:07.048077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3352951 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:19.440 20:22:07 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:19.699 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.618 20:22:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3352924 00:34:21.618 20:22:09 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 
00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 [2024-07-13 20:22:09.076358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting 
I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 [2024-07-13 20:22:09.076706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 
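
Decoding the storm of completions above: sct=0 is the NVMe Generic Command Status type and sc=8 (0x08) is, per the NVMe base specification, Command Aborted due to SQ Deletion -- kill -9 has just removed the target process, so every read/write still queued on the dying qpairs is completed back to the reconnect example with that status. A hedged helper with the mapping hard-coded from the spec (not taken from any SPDK API):

    # Map the (sct, sc) pair that dominates this log to its spec name.
    decode_status() {   # usage: decode_status <sct> <sc>
        if [ "$1" -eq 0 ] && [ "$2" -eq 8 ]; then
            echo 'Generic Command Status: Command Aborted due to SQ Deletion'
        else
            echo "unmapped status sct=$1 sc=$2"
        fi
    }
    decode_status 0 8
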
00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 [2024-07-13 20:22:09.077134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Read completed with error (sct=0, sc=8) 00:34:21.618 starting I/O failed 00:34:21.618 Write completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Read completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Read completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Write completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Read completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Read completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Write completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Write completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Read completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Write 
completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Read completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Write completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Write completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Read completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Write completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Write completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Write completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 Read completed with error (sct=0, sc=8) 00:34:21.619 starting I/O failed 00:34:21.619 [2024-07-13 20:22:09.077467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:21.619 [2024-07-13 20:22:09.077777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.619 [2024-07-13 20:22:09.077811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.619 qpair failed and we were unable to recover it. 00:34:21.619 [2024-07-13 20:22:09.078012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.619 [2024-07-13 20:22:09.078046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.619 qpair failed and we were unable to recover it. 00:34:21.619 [2024-07-13 20:22:09.078221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.619 [2024-07-13 20:22:09.078248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.619 qpair failed and we were unable to recover it. 00:34:21.619 [2024-07-13 20:22:09.078438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.619 [2024-07-13 20:22:09.078465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.619 qpair failed and we were unable to recover it. 00:34:21.619 [2024-07-13 20:22:09.078657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.619 [2024-07-13 20:22:09.078687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.619 qpair failed and we were unable to recover it. 00:34:21.619 [2024-07-13 20:22:09.078888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.619 [2024-07-13 20:22:09.078914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.619 qpair failed and we were unable to recover it. 00:34:21.619 [2024-07-13 20:22:09.079070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.619 [2024-07-13 20:22:09.079095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.619 qpair failed and we were unable to recover it. 
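For anyone triaging this signature: sct=0, sc=8 decodes to the NVMe generic status "Command Aborted due to SQ Deletion", which is what every outstanding command reports once the transport tears the queue down, and the -6 returned by spdk_nvme_qpair_process_completions() is -ENXIO ("No such device or address"). A minimal sketch of how an initiator-side poll loop surfaces both, assuming an io_done callback registered at submit time (the names and the loop are illustrative, not taken from the test code):

#include <stdio.h>
#include "spdk/nvme.h"

/* Completion callback passed as cb_fn when each I/O is submitted
 * (e.g. via spdk_nvme_ns_cmd_read()).  The aborted commands in the
 * log above arrive here with sct=0, sc=8. */
static void
io_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* sct 0 = generic status; sc 0x8 = Command Aborted due to
		 * SQ Deletion, i.e. the queue died under the command. */
		fprintf(stderr, "I/O failed: sct=%d, sc=%d\n",
			cpl->status.sct, cpl->status.sc);
	}
}

/* Poll until the qpair drains or dies.  A negative return value is the
 * "CQ transport error" in the log; -6 is -ENXIO, meaning the TCP
 * connection underneath the qpair is gone. */
static int
drain_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc;

	do {
		rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no batch limit */);
		if (rc < 0) {
			fprintf(stderr, "CQ transport error %d\n", rc);
			return (int)rc;
		}
	} while (rc > 0);

	return 0;
}

One such poll-loop failure per queue is why the log shows one CQ transport error apiece for qpair ids 3, 2, 1 and 4 before the host falls back to the reconnect path seen next.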
00:34:21.619 [2024-07-13 20:22:09.079245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.619 [2024-07-13 20:22:09.079271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420
00:34:21.619 qpair failed and we were unable to recover it.
[log trimmed: the same triplet repeats against tqpair=0x7f4924000b90 through 20:22:09.085170]
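errno = 111 is ECONNREFUSED on Linux: nothing is listening on 10.0.0.2:4420 while the target is down, so every reconnect attempt dies in plain connect() before any NVMe/TCP handshake begins. A stripped-down standalone reproduction of the failing call, using ordinary POSIX sockets rather than SPDK's posix_sock_create():

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(4420),	/* NVMe/TCP well-known port */
	};

	if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
		return 1;
	}

	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		/* With no listener on the target this prints errno = 111
		 * (ECONNREFUSED), matching the posix_sock_create lines. */
		fprintf(stderr, "connect() failed, errno = %d (%s)\n",
			errno, strerror(errno));
		close(fd);
		return 1;
	}

	close(fd);
	return 0;
}

Run against a host with no listener on that port, it prints "connect() failed, errno = 111 (Connection refused)", the same failure the SPDK sock layer keeps reporting below.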
00:34:21.620 [2024-07-13 20:22:09.085370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-13 20:22:09.085398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
00:34:21.620 [2024-07-13 20:22:09.086930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.620 [2024-07-13 20:22:09.086970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:21.620 qpair failed and we were unable to recover it.
[log trimmed: the triplet keeps repeating, alternating between tqpair=0x7f4924000b90 and tqpair=0x7f4934000b90, through 20:22:09.114384; every attempt ends "qpair failed and we were unable to recover it."]
00:34:21.623 [2024-07-13 20:22:09.114570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-13 20:22:09.114598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-13 20:22:09.114760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-13 20:22:09.114785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-13 20:22:09.114923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-13 20:22:09.114967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-13 20:22:09.115128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-13 20:22:09.115153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-13 20:22:09.115319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-13 20:22:09.115346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-13 20:22:09.115496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-13 20:22:09.115522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-13 20:22:09.115712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-13 20:22:09.115740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-13 20:22:09.115933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-13 20:22:09.115960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-13 20:22:09.116123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-13 20:22:09.116151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 00:34:21.623 [2024-07-13 20:22:09.116313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.623 [2024-07-13 20:22:09.116339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.623 qpair failed and we were unable to recover it. 
00:34:21.623 [2024-07-13 20:22:09.116502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.116528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.116729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.116773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.116966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.116998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.117196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.117222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.117416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.117442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.117633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.117661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.117822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.117848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.117994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.118020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.118187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.118229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.118417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.118444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 
00:34:21.624 [2024-07-13 20:22:09.118646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.118671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.118893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.118923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.119086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.119111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.119327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.119360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.119562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.119595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.119782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.119807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.119957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.119982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.120173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.120198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.120404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.120430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.120569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.120594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 
00:34:21.624 [2024-07-13 20:22:09.120729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.120755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.120957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.120982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.121116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.121141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.121343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.121371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.121561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.121586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.121789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.121814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.121967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.121994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.122187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.122212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.122391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.122438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.122645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.122671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 
00:34:21.624 [2024-07-13 20:22:09.122854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.122887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.123083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.123108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.123297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.123325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.123512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.123537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.123682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.123723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.123936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.123964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.124119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.124145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.124314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.124339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.124506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.124531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 00:34:21.624 [2024-07-13 20:22:09.124680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.124706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:21.624 qpair failed and we were unable to recover it. 
00:34:21.624 [2024-07-13 20:22:09.124879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.624 [2024-07-13 20:22:09.124918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.125131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.125174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.125350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.125376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.125548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.125574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.125806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.125831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.126035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.126062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.126225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.126250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.126389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.126414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.126584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.126609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.126823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.126851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 
00:34:21.625 [2024-07-13 20:22:09.127019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.127044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.127214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.127239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.127383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.127407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.127551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.127576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.127721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.127745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.127955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.127981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.128146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.128171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.128368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.128393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.128569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.128614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.128800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.128827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 
00:34:21.625 [2024-07-13 20:22:09.128992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.129018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.129194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.129222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.129410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.129435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.129629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.129654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.129848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.129881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.130035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.130060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.130201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.130226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.130392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.130417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.130609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.130659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.130822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.130846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 
00:34:21.625 [2024-07-13 20:22:09.131018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.131043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.131235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.131264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.131451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.131476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.131636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.131660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.131847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.131882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.132071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.132096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.132227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.132252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.132484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.132510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.132673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.132698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.132879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.132905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 
00:34:21.625 [2024-07-13 20:22:09.133073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.133098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.625 [2024-07-13 20:22:09.133236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.625 [2024-07-13 20:22:09.133261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.625 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.133409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.133434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.133643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.133671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.133844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.133880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.134062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.134087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.134286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.134311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.134480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.134505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.134652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.134696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.134879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.134907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 
00:34:21.626 [2024-07-13 20:22:09.135067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.135092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.135273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.135301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.135504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.135553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.135764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.135789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.135948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.135976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.136195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.136224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.136391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.136416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.136607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.136652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.136878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.136904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.137068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.137093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 
00:34:21.626 [2024-07-13 20:22:09.137288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.137316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.137494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.137519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.137712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.137737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.137904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.137932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.138146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.138171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.138335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.138359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.138528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.138553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.138719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.138744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.138916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.138941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.139132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.139159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 
00:34:21.626 [2024-07-13 20:22:09.139345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.139391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.139582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.139607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.139751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.139776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.139945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.139972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.140134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.140159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.140356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.140381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.626 [2024-07-13 20:22:09.140519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.626 [2024-07-13 20:22:09.140543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.626 qpair failed and we were unable to recover it. 00:34:21.627 [2024-07-13 20:22:09.140716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.627 [2024-07-13 20:22:09.140741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.627 qpair failed and we were unable to recover it. 00:34:21.627 [2024-07-13 20:22:09.140931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.627 [2024-07-13 20:22:09.140960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.627 qpair failed and we were unable to recover it. 00:34:21.627 [2024-07-13 20:22:09.141116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.627 [2024-07-13 20:22:09.141143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.627 qpair failed and we were unable to recover it. 
00:34:21.627 [2024-07-13 20:22:09.141329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.627 [2024-07-13 20:22:09.141354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.627 qpair failed and we were unable to recover it.
[... same triplet — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeated for every retry between 20:22:09.141329 and 20:22:09.181909, all against tqpair=0x165d570, addr=10.0.0.2, port=4420 ...]
00:34:21.632 [2024-07-13 20:22:09.181883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.181909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it.
00:34:21.632 [2024-07-13 20:22:09.182077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.182102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.182260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.182285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.182430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.182455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.182590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.182614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.182748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.182772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.182937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.182963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.183106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.183131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.183299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.183327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.183517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.183541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.183678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.183707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 
00:34:21.632 [2024-07-13 20:22:09.183879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.183917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.184142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.184170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.184356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.184381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.184597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.184625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.184782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.184809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.184981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.185007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.185201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.185226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.185421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.185448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.185630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.185655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.185825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.185849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 
00:34:21.632 [2024-07-13 20:22:09.186027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.186052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.186196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.186221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.186371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.186401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.186587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.186615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.186804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.186829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.187018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.187046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.632 [2024-07-13 20:22:09.187241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.632 [2024-07-13 20:22:09.187267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.632 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.187439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.187465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.187681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.187709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.187859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.187893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 
00:34:21.633 [2024-07-13 20:22:09.188053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.188078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.188273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.188302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.188479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.188507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.188698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.188723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.188908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.188937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.189155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.189183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.189375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.189400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.189622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.189649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.189812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.189840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.190040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.190067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 
00:34:21.633 [2024-07-13 20:22:09.190230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.190258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.190407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.190434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.190621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.190646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.190835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.190862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.191056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.191084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.191245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.191271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.191403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.191447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.191657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.191684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.191853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.191882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.192073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.192100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 
00:34:21.633 [2024-07-13 20:22:09.192269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.192294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.192462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.192487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.192701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.192728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.192925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.192950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.193142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.193167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.193359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.193387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.193565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.193592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.193782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.193807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.194000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.194028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.194216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.194243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 
00:34:21.633 [2024-07-13 20:22:09.194404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.194429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.194612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.194640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.194850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.194884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.195049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.195074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.195213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.195253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.195433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.195461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.195644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.195669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.633 [2024-07-13 20:22:09.195858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.633 [2024-07-13 20:22:09.195891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.633 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.196043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.196068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.196237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.196262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 
00:34:21.634 [2024-07-13 20:22:09.196475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.196503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.196682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.196709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.196935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.196960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.197125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.197153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.197372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.197400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.197592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.197617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.197806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.197834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.198018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.198051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.198214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.198239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.198451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.198479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 
00:34:21.634 [2024-07-13 20:22:09.198698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.198722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.198895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.198931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.199147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.199175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.199333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.199360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.199571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.199596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.199788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.199816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.200013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.200039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.200175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.200200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.200381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.200408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.200617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.200644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 
00:34:21.634 [2024-07-13 20:22:09.200835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.200860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.201072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.201101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.201288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.201315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.201525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.201550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.201706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.201734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.201899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.201928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.202092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.202118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.202306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.202334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.202513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.202541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.202744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.202772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 
00:34:21.634 [2024-07-13 20:22:09.202964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.202990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.203125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.203166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.203355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.203379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.203571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.203598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.203780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.203812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.204020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.204046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.204269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.204297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.204452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.204480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.634 [2024-07-13 20:22:09.204679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.634 [2024-07-13 20:22:09.204704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.634 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.204895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.204923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 
00:34:21.635 [2024-07-13 20:22:09.205072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.205099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.205267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.205292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.205472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.205499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.205680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.205707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.205874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.205899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.206092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.206117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.206341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.206366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.206534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.206559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.206784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.206812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.207026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.207055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 
00:34:21.635 [2024-07-13 20:22:09.207221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.207246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.207405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.207430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.207617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.207645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.207827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.207852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.208065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.208093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.208284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.208309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.208480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.208504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.208694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.208721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.208946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.208972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.209109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.209134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 
00:34:21.635 [2024-07-13 20:22:09.209265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.209290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.209501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.209533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.209699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.209726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.209953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.209979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.210123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.210165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.210343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.210368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.210553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.210581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.210758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.210785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.210973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.210998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 00:34:21.635 [2024-07-13 20:22:09.211181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.635 [2024-07-13 20:22:09.211208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.635 qpair failed and we were unable to recover it. 
00:34:21.640 [2024-07-13 20:22:09.251346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.640 [2024-07-13 20:22:09.251371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.640 qpair failed and we were unable to recover it. 00:34:21.640 [2024-07-13 20:22:09.251555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.640 [2024-07-13 20:22:09.251583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.640 qpair failed and we were unable to recover it. 00:34:21.640 [2024-07-13 20:22:09.251782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.640 [2024-07-13 20:22:09.251807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.640 qpair failed and we were unable to recover it. 00:34:21.640 [2024-07-13 20:22:09.251975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.640 [2024-07-13 20:22:09.252001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.640 qpair failed and we were unable to recover it. 00:34:21.640 [2024-07-13 20:22:09.252195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.640 [2024-07-13 20:22:09.252223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.640 qpair failed and we were unable to recover it. 00:34:21.640 [2024-07-13 20:22:09.252413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.640 [2024-07-13 20:22:09.252441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.640 qpair failed and we were unable to recover it. 00:34:21.640 [2024-07-13 20:22:09.252653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.640 [2024-07-13 20:22:09.252678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.640 qpair failed and we were unable to recover it. 00:34:21.640 [2024-07-13 20:22:09.252875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.640 [2024-07-13 20:22:09.252917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.640 qpair failed and we were unable to recover it. 00:34:21.640 [2024-07-13 20:22:09.253086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.640 [2024-07-13 20:22:09.253111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.640 qpair failed and we were unable to recover it. 00:34:21.640 [2024-07-13 20:22:09.253349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.640 [2024-07-13 20:22:09.253375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.640 qpair failed and we were unable to recover it. 
00:34:21.640 [2024-07-13 20:22:09.253536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.640 [2024-07-13 20:22:09.253564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.640 qpair failed and we were unable to recover it. 00:34:21.640 [2024-07-13 20:22:09.253777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.640 [2024-07-13 20:22:09.253805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.640 qpair failed and we were unable to recover it. 00:34:21.640 [2024-07-13 20:22:09.253966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.640 [2024-07-13 20:22:09.253992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.640 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.254179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.254207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.254390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.254417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.254611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.254636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.254776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.254818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.255021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.255047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.255191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.255216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.255409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.255437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 
00:34:21.641 [2024-07-13 20:22:09.255617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.255645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.255837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.255877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.256031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.256065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.256239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.256265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.256427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.256452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.256640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.256670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.256885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.256915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.257103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.257129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.257327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.257356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.257507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.257535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 
00:34:21.641 [2024-07-13 20:22:09.257746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.257771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.257985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.258014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.258230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.258263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.258501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.258527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.258743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.258771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.258933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.258962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.259124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.259149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.259336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.259364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.259554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.259580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.259765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.259793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 
00:34:21.641 [2024-07-13 20:22:09.259983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.260009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.260231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.260259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.260470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.260506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.260694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.260723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.260908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.260937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.641 [2024-07-13 20:22:09.261132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.641 [2024-07-13 20:22:09.261157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.641 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.261307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.261333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.261551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.261580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.261742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.261766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.261980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.262009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 
00:34:21.921 [2024-07-13 20:22:09.262195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.262223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.262436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.262461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.262630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.262657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.262853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.262887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.263040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.263066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.263243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.263274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.263426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.263456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.263685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.263711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.263903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.263932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.264094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.264129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 
00:34:21.921 [2024-07-13 20:22:09.264326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.264357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.264523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.264551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.264735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.264765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.921 [2024-07-13 20:22:09.264958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.921 [2024-07-13 20:22:09.264995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.921 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.265162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.265191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.265338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.265366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.265578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.265605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.265800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.265832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.266119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.266148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.266310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.266335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 
00:34:21.922 [2024-07-13 20:22:09.266524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.266552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.266745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.266770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.266915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.266941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.267114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.267140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.267321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.267349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.267519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.267544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.267760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.267788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.267978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.268007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.268179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.268204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.268366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.268393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 
00:34:21.922 [2024-07-13 20:22:09.268572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.268599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.268803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.268830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.269033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.269058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.269225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.269253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.269419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.269444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.269617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.269642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.269791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.269839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.270077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.270103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.270276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.270301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.270481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.270508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 
00:34:21.922 [2024-07-13 20:22:09.270671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.270695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.270839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.270863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.271001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.271026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.271196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.271222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.271394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.271418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.271551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.271576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.271744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.271771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.271991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.272021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.272238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.272266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.272482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.272507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 
00:34:21.922 [2024-07-13 20:22:09.272655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.272679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.272879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.272908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.273098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.273123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.273342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.273369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.922 qpair failed and we were unable to recover it. 00:34:21.922 [2024-07-13 20:22:09.273548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.922 [2024-07-13 20:22:09.273575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.273778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.273805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.274023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.274049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.274221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.274246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.274386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.274411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.274572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.274599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 
00:34:21.923 [2024-07-13 20:22:09.274781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.274808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.275003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.275028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.275217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.275247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.275434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.275467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.275652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.275677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.275851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.275884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.276034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.276059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.276223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.276248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.276430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.276458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.276647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.276671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 
00:34:21.923 [2024-07-13 20:22:09.276838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.276863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.277084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.277111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.277306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.277334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.277524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.277549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.277737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.277765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.277919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.277948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.278110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.278136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.278283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.278308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.278470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.278495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.278638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.278664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 
00:34:21.923 [2024-07-13 20:22:09.278848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.278882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.279081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.279106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.279273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.279297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.279512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.279540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.279694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.279722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.279887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.279913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.280084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.280126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.280282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.280309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.280524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.280549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.280754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.280782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 
00:34:21.923 [2024-07-13 20:22:09.280963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.280992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.281160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.281185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.281320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.281345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.281543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.281572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.281744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.923 [2024-07-13 20:22:09.281787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.923 qpair failed and we were unable to recover it. 00:34:21.923 [2024-07-13 20:22:09.281979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.282005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.282205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.282233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.282477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.282523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.282711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.282736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.282885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.282911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 
00:34:21.924 [2024-07-13 20:22:09.283083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.283108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.283305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.283330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.283543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.283569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.283755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.283783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.283952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.283981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.284140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.284168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.284361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.284386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.284528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.284553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.284721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.284746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.284917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.284946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 
00:34:21.924 [2024-07-13 20:22:09.285144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.285169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.285309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.285333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.285499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.285524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.285685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.285713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.285923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.285950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.286108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.286136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.286352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.286380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.286564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.286592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.286762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.286787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.286930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.286955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 
00:34:21.924 [2024-07-13 20:22:09.287106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.287134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.287410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.287459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.287682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.287706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.287917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.287946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.288134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.288159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.288301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.288341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.288531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.288555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.288735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.288763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.288987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.289013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.289163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.289188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 
00:34:21.924 [2024-07-13 20:22:09.289356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.289380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.289571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.289603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.289776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.289804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.289990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.290016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.290183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.290208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.290389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.924 [2024-07-13 20:22:09.290417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.924 qpair failed and we were unable to recover it. 00:34:21.924 [2024-07-13 20:22:09.290604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.290631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.290820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.290848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.291052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.291079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.291291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.291319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 
00:34:21.925 [2024-07-13 20:22:09.291511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.291536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.291733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.291757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.291965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.291992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.292164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.292190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.292417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.292444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.292721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.292746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.292920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.292946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.293159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.293187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.293367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.293395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.293612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.293637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 
00:34:21.925 [2024-07-13 20:22:09.293778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.293803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.293939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.293965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.294128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.294156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.294334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.294361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.294552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.294576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.294717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.294742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.294910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.294936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.295148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.295199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.295400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.295428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.295620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.295648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 
00:34:21.925 [2024-07-13 20:22:09.295821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.295846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.295986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166b0f0 is same with the state(5) to be set 00:34:21.925 [2024-07-13 20:22:09.296229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.296268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.296504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.296547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.296753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.296798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.296972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.296999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.297225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.297268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.297467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.297495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.297719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.297777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.925 qpair failed and we were unable to recover it. 00:34:21.925 [2024-07-13 20:22:09.297947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.925 [2024-07-13 20:22:09.297973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 
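Every failure in this stretch is the same underlying condition: errno 111 is ECONNREFUSED on Linux, meaning nothing is accepting TCP connections at 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port), so each connect() attempt that nvme_tcp_qpair_connect_sock makes through the posix sock layer is refused and the qpair cannot be recovered. A minimal plain-POSIX sketch of the failing call follows; it is an illustration under that assumption, not SPDK's posix_sock_create itself:

    /*
     * Minimal sketch (not SPDK code): reproduce the connect() failure the
     * log shows. With the target host up but no listener on TCP port 4420,
     * the kernel refuses the connection and errno is 111 (ECONNREFUSED).
     */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);          /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the target, prints errno = 111 (Connection refused). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }

Once a target is actually listening on that address and port, the same connect() returns 0 and qpair setup can proceed; until then the initiator keeps retrying and logging the pair of errors seen above.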
00:34:21.926 [2024-07-13 20:22:09.298167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.298210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.298457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.298483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.298676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.298707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.298881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.298907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.299132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.299175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.299380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.299423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.299640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.299683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.299830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.299855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.300013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.300040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.300236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.300278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 
00:34:21.926 [2024-07-13 20:22:09.300554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.300603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.300774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.300801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.301009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.301052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.301252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.301279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.301487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.301531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.301674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.301699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.301848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.301878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.302080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.302124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.302318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.302361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.302670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.302719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 
00:34:21.926 [2024-07-13 20:22:09.302913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.302942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.303153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.303196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.303340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.303367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.303563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.303606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.303752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.303777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.303967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.304009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.304230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.304273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.304411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.304437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.304604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.304630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.304803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.304828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 
00:34:21.926 [2024-07-13 20:22:09.305034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.305077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.305302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.305345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.305572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.305614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.305785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.305810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.305968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.306011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.306243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.306286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.306475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.306519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.306693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.306718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.306949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.306992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 00:34:21.926 [2024-07-13 20:22:09.307193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.926 [2024-07-13 20:22:09.307236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.926 qpair failed and we were unable to recover it. 
00:34:21.926 [2024-07-13 20:22:09.307431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.307474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.307668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.307693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.307861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.307895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.308067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.308092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.308317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.308359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.308549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.308591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.308790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.308815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.309006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.309032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.309200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.309243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.309461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.309503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 
00:34:21.927 [2024-07-13 20:22:09.309670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.309695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.309910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.309936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.310102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.310145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.310306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.310349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.310531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.310573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.310774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.310800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.310987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.311013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.311207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.311252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.311471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.311514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.311663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.311690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 
00:34:21.927 [2024-07-13 20:22:09.311884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.311919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.312085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.312128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.312322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.312351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.312531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.312574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.312743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.312769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.312935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.312979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.313179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.313222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.313443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.313486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.313686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.313711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.313912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.313956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 
00:34:21.927 [2024-07-13 20:22:09.314117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.314160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.314383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.314426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.314632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.314675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.314884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.314910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.315081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.315123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.315292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.315334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.315552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.315595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.315803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.315829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.316028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.316054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 00:34:21.927 [2024-07-13 20:22:09.316250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.927 [2024-07-13 20:22:09.316292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.927 qpair failed and we were unable to recover it. 
00:34:21.928 [2024-07-13 20:22:09.316459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.316502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.316674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.316699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.316877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.316918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.317138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.317181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.317376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.317419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.317582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.317626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.317772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.317797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.318005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.318049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.318232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.318274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.318470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.318513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 
00:34:21.928 [2024-07-13 20:22:09.318655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.318681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.318821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.318846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.319065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.319092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.319277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.319320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.319485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.319529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.319697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.319723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.319916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.319945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.320183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.320226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.320419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.320448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.320640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.320666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 
00:34:21.928 [2024-07-13 20:22:09.320831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.320857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.321021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.321065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.321259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.321301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.321497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.321539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.321686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.321712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.321923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.321950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.322115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.322159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.322380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.322423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.322592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.322618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 00:34:21.928 [2024-07-13 20:22:09.322788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.928 [2024-07-13 20:22:09.322813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.928 qpair failed and we were unable to recover it. 
00:34:21.928 [2024-07-13 20:22:09.322981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.928 [2024-07-13 20:22:09.323010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.928 qpair failed and we were unable to recover it.
00:34:21.928 [2024-07-13 20:22:09.323209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.928 [2024-07-13 20:22:09.323235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.928 qpair failed and we were unable to recover it.
00:34:21.928 [2024-07-13 20:22:09.323431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.928 [2024-07-13 20:22:09.323475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.928 qpair failed and we were unable to recover it.
00:34:21.928 [2024-07-13 20:22:09.323650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.928 [2024-07-13 20:22:09.323675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.928 qpair failed and we were unable to recover it.
00:34:21.928 [2024-07-13 20:22:09.323817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.928 [2024-07-13 20:22:09.323843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.928 qpair failed and we were unable to recover it.
00:34:21.928 [2024-07-13 20:22:09.324049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.928 [2024-07-13 20:22:09.324093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.928 qpair failed and we were unable to recover it.
00:34:21.928 [2024-07-13 20:22:09.324293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.928 [2024-07-13 20:22:09.324323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.928 qpair failed and we were unable to recover it.
00:34:21.928 [2024-07-13 20:22:09.324483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.928 [2024-07-13 20:22:09.324513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.928 qpair failed and we were unable to recover it.
00:34:21.928 [2024-07-13 20:22:09.324808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.928 [2024-07-13 20:22:09.324863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.928 qpair failed and we were unable to recover it.
00:34:21.928 [2024-07-13 20:22:09.325061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.928 [2024-07-13 20:22:09.325086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.928 qpair failed and we were unable to recover it.
00:34:21.928 [2024-07-13 20:22:09.325295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.928 [2024-07-13 20:22:09.325323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.325531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.325559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.325919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.325953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.326164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.326192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.326469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.326517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.326725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.326753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.326951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.326977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.327126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.327150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.327349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.327377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.327564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.327591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.327765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.327790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.327960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.327986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.328152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.328177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.328345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.328385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.328569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.328596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.328773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.328800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.329014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.329054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.329231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.329276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.329473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.329519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.329705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.329749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.329938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.329983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.330164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.330191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.330357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.330401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.330595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.330628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.330794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.330820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.331004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.331035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.331192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.331220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.331385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.331415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.331575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.331603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.331810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.331843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.332015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.332041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.332214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.332242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.332420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.332448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.332638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.332665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.332845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.332876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.333018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.333043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.333187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.333213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.333407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.333435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.333594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.333621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.333794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.333822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.334014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.334040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.929 qpair failed and we were unable to recover it.
00:34:21.929 [2024-07-13 20:22:09.334206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.929 [2024-07-13 20:22:09.334231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.334393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.334421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.334638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.334667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.334810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.334838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.335008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.335034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.335204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.335229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.335424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.335449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.335608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.335633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.335788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.335815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.336006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.336032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.336172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.336198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.336387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.336415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.336594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.336622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.336779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.336809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.336988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.337014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.337152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.337177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.337362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.337389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.337590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.337640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.337824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.337851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.338021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.338046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.338187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.338212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.338405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.338430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.338575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.338599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.338774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.338802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.338993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.339019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.339194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.339219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.339380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.339408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.339620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.339648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.339827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.339852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.340004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.340039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.340191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.340217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.340383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.340441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.340669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.340715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.340984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.341012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.930 [2024-07-13 20:22:09.341210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.930 [2024-07-13 20:22:09.341240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.930 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.341514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.341565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.341748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.341776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.341942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.341969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.342193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.342223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.342409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.342439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.342623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.342651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.342810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.342836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.342988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.343014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.343162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.343187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.343370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.343398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.343579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.343607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.343760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.343788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.344001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.344027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.344174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.344199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.344391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.344418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.344605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.344633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.344842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.344878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.345068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.345093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.345224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.345249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.345389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.345414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.345609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.345634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.345852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.345919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.346095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.346121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.346262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.346297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.346525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.346569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.346762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.346815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.347000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.347027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.347249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.347295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.347462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.347491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.347705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.347748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.347898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.347935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.348164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.348209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.348409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.348453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.348625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.348660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.348841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.348877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.349056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.349102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.349327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.349378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.349577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.349621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.349790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.349817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.350016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.350070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.931 qpair failed and we were unable to recover it.
00:34:21.931 [2024-07-13 20:22:09.350297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.931 [2024-07-13 20:22:09.350342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.350562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.350590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.350738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.350764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.350959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.351007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.351179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.351229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.351427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.351471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.351665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.351690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.351858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.351892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.352102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.352148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.352356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.352402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.352575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.352628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.352781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.352806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.353000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.353045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.353286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.353331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.353490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.353535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.353714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.353740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.353937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.353982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.354178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.354224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.354396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.354445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.354614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.354640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.354810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.354846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.355028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.355076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.355266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.355311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.355519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.355548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.355731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.355759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.355980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.356034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.356229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.356275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.356477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.356521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.356668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.356694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.356888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.356914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.357116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.357142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.357334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.357380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.357683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.357744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.357961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.358005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.358197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.358245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.358467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.358510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.358682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.358708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.358915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.358941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.359169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.359197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.359398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.359440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.932 [2024-07-13 20:22:09.359583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.932 [2024-07-13 20:22:09.359609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.932 qpair failed and we were unable to recover it.
00:34:21.933 [2024-07-13 20:22:09.359766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.933 [2024-07-13 20:22:09.359794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.933 qpair failed and we were unable to recover it.
00:34:21.933 [2024-07-13 20:22:09.360001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.933 [2024-07-13 20:22:09.360045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.933 qpair failed and we were unable to recover it.
00:34:21.933 [2024-07-13 20:22:09.360222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.933 [2024-07-13 20:22:09.360266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:21.933 qpair failed and we were unable to recover it.
00:34:21.933 [2024-07-13 20:22:09.360487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.360517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.360679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.360706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.360855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.360891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.361060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.361113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.361311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.361355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.361551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.361595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.361763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.361790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.361980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.362026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.362224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.362268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.362464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.362508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 
00:34:21.933 [2024-07-13 20:22:09.362682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.362708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.362861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.362894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.363091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.363138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.363360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.363405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.363616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.363660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.363824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.363850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.364084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.364128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.364359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.364405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.364608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.364651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.364855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.364887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 
00:34:21.933 [2024-07-13 20:22:09.365082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.365126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.365336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.365380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.365606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.365650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.365836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.365862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.366036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.366080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.366306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.366351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.366570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.366614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.366757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.366783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.366953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.366979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.367201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.367247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 
00:34:21.933 [2024-07-13 20:22:09.367472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.367526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.367677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.367703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.367893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.367920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.368095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.368121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.368325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.368370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.368574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.368618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.368794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.933 [2024-07-13 20:22:09.368820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.933 qpair failed and we were unable to recover it. 00:34:21.933 [2024-07-13 20:22:09.368974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.369001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.369209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.369237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.369444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.369488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 
00:34:21.934 [2024-07-13 20:22:09.369712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.369755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.369929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.369960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.370173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.370216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.370414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.370458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.370658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.370704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.370910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.370937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.371137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.371180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.371375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.371420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.371621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.371650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.371862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.371892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 
00:34:21.934 [2024-07-13 20:22:09.372086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.372130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.372324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.372354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.372604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.372651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.372828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.372854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.373034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.373062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.373246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.373294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.373466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.373513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.373682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.373713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.373914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.373940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.374128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.374172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 
00:34:21.934 [2024-07-13 20:22:09.374357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.374406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.374592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.374620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.374825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.374851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.375048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.375096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.375310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.375354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.375548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.375591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.375743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.375770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.375990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.376041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.376231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.376260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.376437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.376481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 
00:34:21.934 [2024-07-13 20:22:09.376652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.376678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.934 qpair failed and we were unable to recover it. 00:34:21.934 [2024-07-13 20:22:09.376885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.934 [2024-07-13 20:22:09.376912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.377131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.377159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.377380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.377424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.377592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.377641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.377838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.377871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.378045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.378070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.378258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.378302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.378524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.378554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.378712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.378739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 
00:34:21.935 [2024-07-13 20:22:09.378956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.379000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.379201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.379251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.379411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.379455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.379602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.379628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.379806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.379833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.380035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.380080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.380290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.380318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.380498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.380543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.380711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.380737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.380925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.380965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 
00:34:21.935 [2024-07-13 20:22:09.381139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.381184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.381404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.381448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.381620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.381647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.381794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.381820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.382023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.382068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.382274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.382322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.382490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.382534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.382727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.382757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.382973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.383016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.383194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.383237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 
00:34:21.935 [2024-07-13 20:22:09.383393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.383438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.383660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.383709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.383903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.383948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.384115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.384159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.384357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.384403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.384611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.384654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.384881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.384912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.385098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.385127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.385334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.385362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.385541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.385569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 
00:34:21.935 [2024-07-13 20:22:09.385742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.385766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.385911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.935 [2024-07-13 20:22:09.385937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.935 qpair failed and we were unable to recover it. 00:34:21.935 [2024-07-13 20:22:09.386084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.386111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.386265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.386290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.386477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.386505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.386691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.386718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.386885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.386913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.387054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.387080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.387249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.387273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.387415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.387440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 
00:34:21.936 [2024-07-13 20:22:09.387622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.387649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.387814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.387838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.388021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.388047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.388212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.388240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.388455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.388487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.388681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.388709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.388871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.388896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.389031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.389056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.389199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.389226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.389395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.389423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 
00:34:21.936 [2024-07-13 20:22:09.389617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.389659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.389840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.389875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.390034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.390059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.390252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.390277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.390409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.390434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.390610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.390637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.390821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.390848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.391014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.391039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.391187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.391212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.391349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.391374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 
00:34:21.936 [2024-07-13 20:22:09.391533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.391560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.391788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.391816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.392028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.392054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.392232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.392260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.392468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.392496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.392692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.392720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.392876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.392923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.393067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.393091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.393257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.393282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 00:34:21.936 [2024-07-13 20:22:09.393414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.393439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it. 
00:34:21.936 [2024-07-13 20:22:09.393605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.936 [2024-07-13 20:22:09.393632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.936 qpair failed and we were unable to recover it.
[... the same three-message connect failure repeats roughly 200 more times between 20:22:09.393 and 20:22:09.439 (log wall time 00:34:21.936-00:34:21.942), alternating between tqpair=0x165d570 and tqpair=0x7f492c000b90, always against addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:21.942 [2024-07-13 20:22:09.439651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.439677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.439875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.439901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.440059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.440102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.440294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.440322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.440536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.440564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.440771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.440796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.440953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.440992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.441165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.441195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.441408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.441436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.441642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.441670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 
00:34:21.942 [2024-07-13 20:22:09.441853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.441890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.442086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.442111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.442298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.442326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.442492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.442520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.442703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.442730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.442920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.442946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.443113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.443159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.443365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.443392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.443626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.443677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.443859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.443894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 
00:34:21.942 [2024-07-13 20:22:09.444079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.444104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.444248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.444272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.444488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.444515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.444740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.444767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.444952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.444978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.445195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.445222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.445388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.445415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.445626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.445654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.445829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.445856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.446056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.446081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 
00:34:21.942 [2024-07-13 20:22:09.446222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.446247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.446431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.942 [2024-07-13 20:22:09.446459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.942 qpair failed and we were unable to recover it. 00:34:21.942 [2024-07-13 20:22:09.446647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.446675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.446863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.446899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.447038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.447063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.447239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.447267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.447474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.447502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.447657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.447684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.447876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.447904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.448095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.448119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 
00:34:21.943 [2024-07-13 20:22:09.448305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.448332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.448514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.448541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.448778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.448827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.449005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.449030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.449242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.449269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.449536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.449564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.449783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.449811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.449989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.450014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.450221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.450246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.450435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.450463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 
00:34:21.943 [2024-07-13 20:22:09.450629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.450657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.450815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.450839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.451015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.451040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.451205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.451233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.451593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.451654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.451837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.451864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.452035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.452060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.452307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.452355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.452564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.452591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.452748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.452775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 
00:34:21.943 [2024-07-13 20:22:09.452959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.452989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.453175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.453202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.453358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.453386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.453777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.453834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.454033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.454058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.454245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.454273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.454471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.943 [2024-07-13 20:22:09.454520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.943 qpair failed and we were unable to recover it. 00:34:21.943 [2024-07-13 20:22:09.454670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.454698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.454862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.454895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.455050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.455075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 
00:34:21.944 [2024-07-13 20:22:09.455239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.455266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.455480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.455508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.455797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.455847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.456046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.456072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.456268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.456296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.456512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.456537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.456698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.456726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.456936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.456965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.457128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.457153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.457337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.457364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 
00:34:21.944 [2024-07-13 20:22:09.457546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.457574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.457742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.457766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.457984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.458012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.458199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.458227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.458408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.458432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.458604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.458631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.458815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.458844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.459042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.459067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.459287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.459315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.459496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.459523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 
00:34:21.944 [2024-07-13 20:22:09.459683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.459708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.459845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.459880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.460054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.460079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.460215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.460241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.460441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.460469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.460655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.460682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.460839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.460864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.461029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.461057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.461241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.461269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.461484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.461509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 
00:34:21.944 [2024-07-13 20:22:09.461701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.461728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.461910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.461939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.462100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.462125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.462311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.462338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.462491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.462519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.462712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.462736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.462886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.462914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.463062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.944 [2024-07-13 20:22:09.463089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.944 qpair failed and we were unable to recover it. 00:34:21.944 [2024-07-13 20:22:09.463273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.463298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.463470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.463495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 
00:34:21.945 [2024-07-13 20:22:09.463682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.463709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.463919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.463945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.464089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.464114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.464309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.464336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.464501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.464526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.464718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.464746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.464896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.464925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.465118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.465143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.465299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.465329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.465542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.465570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 
00:34:21.945 [2024-07-13 20:22:09.465755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.465780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.465949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.465975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.466159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.466186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.466349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.466374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.466560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.466588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.466748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.466776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.466958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.466983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.467134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.467159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.467341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.467372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 00:34:21.945 [2024-07-13 20:22:09.467531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.945 [2024-07-13 20:22:09.467556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:21.945 qpair failed and we were unable to recover it. 
00:34:21.945 [2024-07-13 20:22:09.467738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.945 [2024-07-13 20:22:09.467765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:21.945 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111, sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats verbatim, with only the microsecond timestamps advancing, from 20:22:09.467920 through 20:22:09.479841 ...]
00:34:21.946 [2024-07-13 20:22:09.480053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.946 [2024-07-13 20:22:09.480096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420
00:34:21.946 qpair failed and we were unable to recover it.
[... the same failure, now against tqpair=0x7f4924000b90 at the same addr=10.0.0.2, port=4420, repeats from 20:22:09.480325 through 20:22:09.512211 ...]
00:34:21.950 [2024-07-13 20:22:09.512419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.950 [2024-07-13 20:22:09.512447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.950 qpair failed and we were unable to recover it. 00:34:21.950 [2024-07-13 20:22:09.512622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.950 [2024-07-13 20:22:09.512650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.950 qpair failed and we were unable to recover it. 00:34:21.950 [2024-07-13 20:22:09.512861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.950 [2024-07-13 20:22:09.512895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.950 qpair failed and we were unable to recover it. 00:34:21.950 [2024-07-13 20:22:09.513053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.950 [2024-07-13 20:22:09.513078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.950 qpair failed and we were unable to recover it. 00:34:21.950 [2024-07-13 20:22:09.513227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.950 [2024-07-13 20:22:09.513253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.950 qpair failed and we were unable to recover it. 00:34:21.950 [2024-07-13 20:22:09.513420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.950 [2024-07-13 20:22:09.513446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.950 qpair failed and we were unable to recover it. 00:34:21.950 [2024-07-13 20:22:09.513607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.950 [2024-07-13 20:22:09.513636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.513834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.513860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.514019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.514046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.514230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.514258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 
00:34:21.951 [2024-07-13 20:22:09.514468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.514496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.514660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.514687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.514879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.514908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.515072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.515100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.515261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.515288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.515505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.515534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.515761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.515787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.515951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.515978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.516190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.516219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.516405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.516433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 
00:34:21.951 [2024-07-13 20:22:09.516624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.516649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.516791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.516816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.517008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.517037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.517205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.517232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.517395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.517425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.517610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.517638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.517835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.517861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.518039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.518064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.518263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.518288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.518519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.518549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 
00:34:21.951 [2024-07-13 20:22:09.518767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.518795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.518975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.519005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.519194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.519219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.519365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.519391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.519559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.519584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.519749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.519775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.519918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.519944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.520075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.520101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.520271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.520297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.951 [2024-07-13 20:22:09.520486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.520514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 
00:34:21.951 [2024-07-13 20:22:09.520662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.951 [2024-07-13 20:22:09.520691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.951 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.520882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.520917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.521077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.521106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.521320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.521349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.521559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.521585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.521797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.521826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.522029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.522058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.522238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.522264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.522403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.522429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.522628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.522657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 
00:34:21.952 [2024-07-13 20:22:09.522835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.522861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.523017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.523062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.523273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.523301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.523460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.523485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.523671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.523699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.523844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.523885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.524076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.524103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.524318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.524348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.524561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.524589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.524752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.524778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 
00:34:21.952 [2024-07-13 20:22:09.524963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.524993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.525222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.525251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.525446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.525471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.525700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.525728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.525894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.525925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.526140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.526166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.526318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.526347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.526504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.526533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.526747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.526773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.526969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.527004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 
00:34:21.952 [2024-07-13 20:22:09.527193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.527222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.527405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.527431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.527574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.527600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.527749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.527776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.527944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.527971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.528141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.528168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.528365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.528396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.528591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.528617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.528833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.528862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.529093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.529122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 
00:34:21.952 [2024-07-13 20:22:09.529311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.952 [2024-07-13 20:22:09.529338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.952 qpair failed and we were unable to recover it. 00:34:21.952 [2024-07-13 20:22:09.529537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.529566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.529747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.529775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.529996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.530023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.530183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.530213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.530429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.530458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.530656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.530691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.530863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.530895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.531059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.531089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.531288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.531314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 
00:34:21.953 [2024-07-13 20:22:09.531517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.531547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.531729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.531758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.531955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.531982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.532199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.532228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.532392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.532420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.532613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.532638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.532807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.532833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.533012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.533042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.533229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.533255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.533446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.533475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 
00:34:21.953 [2024-07-13 20:22:09.533665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.533694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.533883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.533916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.534105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.534133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.534318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.534346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.534513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.534539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.534713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.534739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.534931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.534961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.535124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.535160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.535381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.535410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.535560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.535594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 
00:34:21.953 [2024-07-13 20:22:09.535778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.535804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.536020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.536049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.536264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.536293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.536484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.536510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.536731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.536760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.536932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.536962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.537126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.537151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.537348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.537376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.537558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.537587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.537746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.537771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 
00:34:21.953 [2024-07-13 20:22:09.537984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.538014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.538231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.953 [2024-07-13 20:22:09.538259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.953 qpair failed and we were unable to recover it. 00:34:21.953 [2024-07-13 20:22:09.538470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-13 20:22:09.538496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-13 20:22:09.538715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-13 20:22:09.538744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-13 20:22:09.538928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-13 20:22:09.538958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-13 20:22:09.539142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-13 20:22:09.539168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-13 20:22:09.539383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-13 20:22:09.539412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-13 20:22:09.539627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-13 20:22:09.539654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-13 20:22:09.539795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-13 20:22:09.539822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 00:34:21.954 [2024-07-13 20:22:09.540003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:21.954 [2024-07-13 20:22:09.540029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:21.954 qpair failed and we were unable to recover it. 
00:34:21.954 [2024-07-13 20:22:09.540170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:21.954 [2024-07-13 20:22:09.540212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420
00:34:21.954 qpair failed and we were unable to recover it.
[... the three-line failure sequence above repeats for every reconnect attempt logged between 2024-07-13 20:22:09.540170 and 20:22:09.586098 (pipeline time 00:34:21.954 through 00:34:22.240); each attempt fails with connect() errno = 111 against addr=10.0.0.2, port=4420, mostly on tqpair=0x7f4924000b90, with a short run on tqpair=0x7f4934000b90 from 20:22:09.558640 to 20:22:09.567617 ...]
00:34:22.240 [2024-07-13 20:22:09.586070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.240 [2024-07-13 20:22:09.586098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420
00:34:22.240 qpair failed and we were unable to recover it.
00:34:22.240 [2024-07-13 20:22:09.586270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.240 [2024-07-13 20:22:09.586296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.240 qpair failed and we were unable to recover it. 00:34:22.240 [2024-07-13 20:22:09.586464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.240 [2024-07-13 20:22:09.586489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.240 qpair failed and we were unable to recover it. 00:34:22.240 [2024-07-13 20:22:09.586656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.240 [2024-07-13 20:22:09.586681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.240 qpair failed and we were unable to recover it. 00:34:22.240 [2024-07-13 20:22:09.586824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.240 [2024-07-13 20:22:09.586852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.240 qpair failed and we were unable to recover it. 00:34:22.240 [2024-07-13 20:22:09.587031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.240 [2024-07-13 20:22:09.587058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.240 qpair failed and we were unable to recover it. 00:34:22.240 [2024-07-13 20:22:09.587229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.240 [2024-07-13 20:22:09.587256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.240 qpair failed and we were unable to recover it. 00:34:22.240 [2024-07-13 20:22:09.587432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.240 [2024-07-13 20:22:09.587457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.240 qpair failed and we were unable to recover it. 00:34:22.240 [2024-07-13 20:22:09.587623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.240 [2024-07-13 20:22:09.587648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.240 qpair failed and we were unable to recover it. 00:34:22.240 [2024-07-13 20:22:09.587816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.240 [2024-07-13 20:22:09.587842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.240 qpair failed and we were unable to recover it. 00:34:22.240 [2024-07-13 20:22:09.587992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.240 [2024-07-13 20:22:09.588018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.240 qpair failed and we were unable to recover it. 
00:34:22.240 [2024-07-13 20:22:09.588195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.240 [2024-07-13 20:22:09.588221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.240 qpair failed and we were unable to recover it. 00:34:22.240 [2024-07-13 20:22:09.588411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.240 [2024-07-13 20:22:09.588439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.240 qpair failed and we were unable to recover it. 00:34:22.240 [2024-07-13 20:22:09.588610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.240 [2024-07-13 20:22:09.588636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.240 qpair failed and we were unable to recover it. 00:34:22.240 [2024-07-13 20:22:09.588830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.240 [2024-07-13 20:22:09.588856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.240 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.589030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.589057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.589206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.589231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.589396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.589422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.589587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.589612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.589782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.589809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.589975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.590002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 
00:34:22.241 [2024-07-13 20:22:09.590180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.590208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.590431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.590460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.590629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.590656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.590832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.590858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.591014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.591041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.591214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.591241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.591481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.591507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.591649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.591675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.591872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.591899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.592071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.592099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 
00:34:22.241 [2024-07-13 20:22:09.592312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.592338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.592488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.592513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.592682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.592708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.592857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.592900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.593049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.593075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.593212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.593239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.593381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.593411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.593612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.593639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.593779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.593805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.594000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.594043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 
00:34:22.241 [2024-07-13 20:22:09.594209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.594235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.594434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.594459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.594609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.594634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.594807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.594832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.594986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.241 [2024-07-13 20:22:09.595012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.241 qpair failed and we were unable to recover it. 00:34:22.241 [2024-07-13 20:22:09.595189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.595215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.595358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.595384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.595523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.595549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.595695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.595722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.595901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.595928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 
00:34:22.242 [2024-07-13 20:22:09.596071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.596098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.596266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.596292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.596465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.596491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.596660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.596687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.596828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.596855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.597031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.597057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.597234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.597259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.597402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.597429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.597574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.597600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.597770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.597796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 
00:34:22.242 [2024-07-13 20:22:09.597956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.597982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.598143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.598170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.598336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.598362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.598509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.598536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.598708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.598734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.598875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.598902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.599038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.599063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.599243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.599268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.599442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.599468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.599632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.599661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 
00:34:22.242 [2024-07-13 20:22:09.599845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.599877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.600028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.600054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.600250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.600275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.600418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.600443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.600584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.600610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.600760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.600786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.600957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.600987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.601135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.601162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.601360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.601386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.601556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.601582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 
00:34:22.242 [2024-07-13 20:22:09.601728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.601753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.601921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.601948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.602090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.602116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.602285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.602310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.602452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.602495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.242 qpair failed and we were unable to recover it. 00:34:22.242 [2024-07-13 20:22:09.602708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.242 [2024-07-13 20:22:09.602733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.602883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.602910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.603114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.603140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.603315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.603341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.603510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.603537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 
00:34:22.243 [2024-07-13 20:22:09.603685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.603712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.603883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.603909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.604090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.604116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.604282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.604307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.604479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.604505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.604648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.604675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.604819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.604845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.604994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.605020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.605165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.605190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.605332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.605358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 
00:34:22.243 [2024-07-13 20:22:09.605502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.605527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.605697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.605723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.605876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.605902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.606085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.606111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.606256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.606283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.606453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.606478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.606650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.606676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.606841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.606872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.607011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.607037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.607228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.607254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 
00:34:22.243 [2024-07-13 20:22:09.607393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.607419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.607612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.607654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.607822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.607848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.608009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.608035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.608221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.608250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.608437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.608463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.608605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.608636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.608830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.608855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.609054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.609080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.609224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.609251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 
00:34:22.243 [2024-07-13 20:22:09.609416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.609442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.609660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.609685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.609851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.609884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.610056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.610084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.610276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.610303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.610449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.243 [2024-07-13 20:22:09.610476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.243 qpair failed and we were unable to recover it. 00:34:22.243 [2024-07-13 20:22:09.610695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.244 [2024-07-13 20:22:09.610720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.244 qpair failed and we were unable to recover it. 00:34:22.244 [2024-07-13 20:22:09.610885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.244 [2024-07-13 20:22:09.610912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.244 qpair failed and we were unable to recover it. 00:34:22.244 [2024-07-13 20:22:09.611079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.244 [2024-07-13 20:22:09.611105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.244 qpair failed and we were unable to recover it. 00:34:22.244 [2024-07-13 20:22:09.611277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.244 [2024-07-13 20:22:09.611303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.244 qpair failed and we were unable to recover it. 
00:34:22.244 [2024-07-13 20:22:09.611457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.244 [2024-07-13 20:22:09.611482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.244 qpair failed and we were unable to recover it. 00:34:22.244 [2024-07-13 20:22:09.611679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.244 [2024-07-13 20:22:09.611722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.244 qpair failed and we were unable to recover it. 00:34:22.244 [2024-07-13 20:22:09.611923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.244 [2024-07-13 20:22:09.611955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.244 qpair failed and we were unable to recover it. 00:34:22.244 [2024-07-13 20:22:09.612139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.244 [2024-07-13 20:22:09.612165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.244 qpair failed and we were unable to recover it. 00:34:22.244 [2024-07-13 20:22:09.612391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.244 [2024-07-13 20:22:09.612441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.244 qpair failed and we were unable to recover it. 00:34:22.244 [2024-07-13 20:22:09.612618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.244 [2024-07-13 20:22:09.612646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.244 qpair failed and we were unable to recover it. 00:34:22.244 [2024-07-13 20:22:09.612831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.244 [2024-07-13 20:22:09.612857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.244 qpair failed and we were unable to recover it. 00:34:22.244 [2024-07-13 20:22:09.613030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.244 [2024-07-13 20:22:09.613056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.244 qpair failed and we were unable to recover it. 00:34:22.244 [2024-07-13 20:22:09.613251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.244 [2024-07-13 20:22:09.613279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.244 qpair failed and we were unable to recover it. 00:34:22.244 [2024-07-13 20:22:09.613439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.244 [2024-07-13 20:22:09.613464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.244 qpair failed and we were unable to recover it. 
00:34:22.248 [2024-07-13 20:22:09.649532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.249 [2024-07-13 20:22:09.649567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.249 qpair failed and we were unable to recover it.
00:34:22.249 [2024-07-13 20:22:09.649759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.249 [2024-07-13 20:22:09.649795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.249 qpair failed and we were unable to recover it.
00:34:22.249 [2024-07-13 20:22:09.650013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.249 [2024-07-13 20:22:09.650050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.249 qpair failed and we were unable to recover it.
00:34:22.249 [2024-07-13 20:22:09.650248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.249 [2024-07-13 20:22:09.650286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.249 qpair failed and we were unable to recover it.
00:34:22.249 [2024-07-13 20:22:09.650472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.249 [2024-07-13 20:22:09.650498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.249 qpair failed and we were unable to recover it.
00:34:22.249 [2024-07-13 20:22:09.650670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.249 [2024-07-13 20:22:09.650698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.249 qpair failed and we were unable to recover it.
00:34:22.249 [2024-07-13 20:22:09.650909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.249 [2024-07-13 20:22:09.650939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.249 qpair failed and we were unable to recover it.
00:34:22.249 [2024-07-13 20:22:09.651100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.249 [2024-07-13 20:22:09.651126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.249 qpair failed and we were unable to recover it.
00:34:22.249 [2024-07-13 20:22:09.651265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.249 [2024-07-13 20:22:09.651290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.249 qpair failed and we were unable to recover it.
00:34:22.249 [2024-07-13 20:22:09.651486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.249 [2024-07-13 20:22:09.651511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.249 qpair failed and we were unable to recover it.
00:34:22.249 [2024-07-13 20:22:09.657469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.249 [2024-07-13 20:22:09.657495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.249 qpair failed and we were unable to recover it. 00:34:22.249 [2024-07-13 20:22:09.657659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.657683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.657853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.657884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.658012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.658037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.658188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.658213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.658355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.658380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.658542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.658567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.658724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.658751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.658948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.658974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.659167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.659192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 
00:34:22.250 [2024-07-13 20:22:09.659358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.659387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.659556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.659586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.659760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.659785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.659983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.660009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.660179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.660204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.660372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.660397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.660593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.660620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.660787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.660815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.660986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.661011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.661158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.661183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 
00:34:22.250 [2024-07-13 20:22:09.661345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.661370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.661537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.661562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.661722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.661747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.661941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.661967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.662129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.662154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.662285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.662309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.662508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.662535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.662752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.662777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.662920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.662945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.663089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.663114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 
00:34:22.250 [2024-07-13 20:22:09.663284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.663308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.663479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.663504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.663697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.663724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.663931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.663957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.664121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.664146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.664308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.664333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.664505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.664529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.664683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.664711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.664887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.664930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.665070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.665095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 
00:34:22.250 [2024-07-13 20:22:09.665241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.250 [2024-07-13 20:22:09.665265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.250 qpair failed and we were unable to recover it. 00:34:22.250 [2024-07-13 20:22:09.665409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.665434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.665570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.665594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.665737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.665761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.665968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.665997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.666131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.666156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.666326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.666350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.666525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.666553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.666717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.666741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.666885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.666911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 
00:34:22.251 [2024-07-13 20:22:09.667070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.667095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.667256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.667281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.667450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.667474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.667656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.667683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.667877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.667903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.668071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.668096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.668234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.668259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.668399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.668423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.668593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.668617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.668759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.668785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 
00:34:22.251 [2024-07-13 20:22:09.668945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.668971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.669150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.669177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.669337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.669361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.669524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.669548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.669694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.669719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.669882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.669907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.670045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.670070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.670278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.670305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.670447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.670473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.670671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.670696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 
00:34:22.251 [2024-07-13 20:22:09.670876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.670903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.671047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.671074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.671295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.671320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.671478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.671504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.671679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.671706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.671864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.671896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.672092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.672119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.251 qpair failed and we were unable to recover it. 00:34:22.251 [2024-07-13 20:22:09.672262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.251 [2024-07-13 20:22:09.672289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.672481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.672507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.672668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.672695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 
00:34:22.252 [2024-07-13 20:22:09.672870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.672898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.673062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.673088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.673216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.673241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.673401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.673428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.673608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.673633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.673795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.673822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.673983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.674011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.674193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.674219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.674374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.674401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.674577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.674605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 
00:34:22.252 [2024-07-13 20:22:09.674783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.674810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.674990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.675018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.675205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.675232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.675421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.675446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.675589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.675614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.675783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.675808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.675947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.675973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.676184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.676211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.676393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.676420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.676610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.676636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 
00:34:22.252 [2024-07-13 20:22:09.676826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.676854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.677033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.677061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.677217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.677242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.677439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.677464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.677656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.677684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.677857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.677889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.678036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.678062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.678202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.678227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.678423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.678447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.678636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.678664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 
00:34:22.252 [2024-07-13 20:22:09.678838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.678873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.679033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.679060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.679255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.679288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.679499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.679527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.679752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.679777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.679974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.680003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.680165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.680193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.680379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.252 [2024-07-13 20:22:09.680404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.252 qpair failed and we were unable to recover it. 00:34:22.252 [2024-07-13 20:22:09.680616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.253 [2024-07-13 20:22:09.680644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.253 qpair failed and we were unable to recover it. 00:34:22.253 [2024-07-13 20:22:09.680799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.253 [2024-07-13 20:22:09.680827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.253 qpair failed and we were unable to recover it. 
00:34:22.253 [2024-07-13 20:22:09.681023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.253 [2024-07-13 20:22:09.681049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.253 qpair failed and we were unable to recover it. 00:34:22.253 [2024-07-13 20:22:09.681239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.253 [2024-07-13 20:22:09.681266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.253 qpair failed and we were unable to recover it. 00:34:22.253 [2024-07-13 20:22:09.681478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.253 [2024-07-13 20:22:09.681505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.253 qpair failed and we were unable to recover it. 00:34:22.253 [2024-07-13 20:22:09.681695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.253 [2024-07-13 20:22:09.681719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.253 qpair failed and we were unable to recover it. 00:34:22.253 [2024-07-13 20:22:09.681879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.253 [2024-07-13 20:22:09.681908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.253 qpair failed and we were unable to recover it. 00:34:22.253 [2024-07-13 20:22:09.682101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.253 [2024-07-13 20:22:09.682125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.253 qpair failed and we were unable to recover it. 00:34:22.253 [2024-07-13 20:22:09.682277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.253 [2024-07-13 20:22:09.682302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.253 qpair failed and we were unable to recover it. 00:34:22.253 [2024-07-13 20:22:09.682443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.253 [2024-07-13 20:22:09.682468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.253 qpair failed and we were unable to recover it. 00:34:22.253 [2024-07-13 20:22:09.682629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.253 [2024-07-13 20:22:09.682654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.253 qpair failed and we were unable to recover it. 00:34:22.253 [2024-07-13 20:22:09.682786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.253 [2024-07-13 20:22:09.682811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.253 qpair failed and we were unable to recover it. 
00:34:22.253 [2024-07-13 20:22:09.682952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.682977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.683136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.683164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.683359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.683383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.683552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.683576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.683794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.683823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.684001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.684027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.684178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.684203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.684426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.684454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.684640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.684665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.684850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.684888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.685035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.685062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.685255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.685280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.685464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.685492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.685642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.685670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.685859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.685891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.686054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.686080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.686226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.686251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.686385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.686410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.686623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.686651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.686828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.686857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.687039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.687064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.687206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.687232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.687378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.687403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.687539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.687565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.687732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.687758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.687885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.687912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.688058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.688085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.688295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.688323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.688504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.253 [2024-07-13 20:22:09.688532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.253 qpair failed and we were unable to recover it.
00:34:22.253 [2024-07-13 20:22:09.688723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.688748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.688942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.688969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.689169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.689197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.689357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.689382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.689571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.689598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.689774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.689801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.689997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.690023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.690191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.690216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.690362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.690387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.690557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.690582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.690717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.690743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.690912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.690956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.691133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.691159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.691343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.691371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.691557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.691584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.691771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.691796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.691966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.691997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.692199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.692226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.692439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.692465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.692663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.692691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.692874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.692903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.693093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.693118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.693335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.693363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.693551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.693576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.693754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.693779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.693951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.693978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.694145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.694173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.694338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.694364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.694551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.694578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.694738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.694766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.694958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.694986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.695178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.695205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.695376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.695403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.695592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.695617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.695812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.695840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.696037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.696065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.696278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.696304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.696488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.696516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.696704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.696733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.696929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.696956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.697116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.254 [2024-07-13 20:22:09.697144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.254 qpair failed and we were unable to recover it.
00:34:22.254 [2024-07-13 20:22:09.697350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.697377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.697590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.697615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.697775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.697803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.697956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.697985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.698169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.698194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.698403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.698430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.698586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.698613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.698805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.698834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.699013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.699039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.699262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.699290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.699451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.699476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.699653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.699695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.699917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.699946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.700136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.700162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.700361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.700389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.700540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.700568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.700720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.700745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.700942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.700972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.701181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.701209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.701422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.701447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.701680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.701705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.701882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.701909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.702086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.702111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.702309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.702338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.702529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.702554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.702699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.702724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.702869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.702895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.703071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.703096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.703260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.703285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.703476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.703506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.703716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.703743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.703908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.703934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.704084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.704109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.704255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.704280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.255 [2024-07-13 20:22:09.704442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.255 [2024-07-13 20:22:09.704471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.255 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.704638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.704663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.704817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.704847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.705049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.705075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.705226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.705250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.705417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.705442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.705606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.705632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.705788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.705815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.706016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.706042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.706236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.706261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.706420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.706448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.706643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.706668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.706803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.706828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.707061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.707090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.707257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.707285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.707437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.707462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.707671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.707699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.707884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.707914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.708104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.708129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.708290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.708318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.708504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.708532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.708711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.708737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.708937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.708963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.709122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.709149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.709307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.709333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.709474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.709517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.709697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.709725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.709919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.709949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.710110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.710138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.710327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.710355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.710521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.710546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.710733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.710764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.710970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.710998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.711193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.711223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.711374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.711402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.711612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.711639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.711802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.711827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.711996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.712022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.712209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.712237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.712396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.712421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.712617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.712645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.256 qpair failed and we were unable to recover it.
00:34:22.256 [2024-07-13 20:22:09.712833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.256 [2024-07-13 20:22:09.712861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.713039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.713066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.713286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.713314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.713502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.713529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.713711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.713736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.713926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.713954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.714170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.714198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.714392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.714416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.714589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.714614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.714827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.714855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.715031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.715056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.715233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.715261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.715443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.715471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.715663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.715690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.715879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.715908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.716089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.716117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.716274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.716299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.716486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.716514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.716729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.716771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.716958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.716984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.717209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.717237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.717428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.717456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.717612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.717637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.717781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.717823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.718006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.718032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.718193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.718218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.718412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.718440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.718653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.718681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.718873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.718899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.719090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.719118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.719332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.719360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.719542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.719568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.719714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.719739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.719925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.719953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.720116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.720141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.720352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.720379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.720607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.720636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.720799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.720824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.721016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.721045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.721251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.721279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.721463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.257 [2024-07-13 20:22:09.721488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.257 qpair failed and we were unable to recover it.
00:34:22.257 [2024-07-13 20:22:09.721680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.721707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.721887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.721915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.722095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.722121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.722277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.722305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.722495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.722523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.722718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.722742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.722928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.722956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.723167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.723195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.723356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.723383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.723539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.723567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.723752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.723780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.723967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.723993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.724151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.724179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.724336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.724368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.724587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.724611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.724801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.724829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.725009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.725035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.725176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.725201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.725400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.725428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.725635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.725663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.725879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.725905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.726079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.726105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.726262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.726290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.726492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.258 [2024-07-13 20:22:09.726517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.258 qpair failed and we were unable to recover it.
00:34:22.258 [2024-07-13 20:22:09.726660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.726685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.726853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.726883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.727075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.727100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.727294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.727322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.727471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.727498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.727714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.727739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.727904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.727932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.728115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.728143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.728297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.728322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.728491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.728515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 
00:34:22.258 [2024-07-13 20:22:09.728659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.728683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.728851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.728883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.729064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.729092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.729299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.729327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.729513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.729538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.729703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.729728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.729918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.729954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.258 qpair failed and we were unable to recover it. 00:34:22.258 [2024-07-13 20:22:09.730148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.258 [2024-07-13 20:22:09.730173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.730343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.730368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.730559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.730588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 
00:34:22.259 [2024-07-13 20:22:09.730746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.730771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.730933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.730960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.731134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.731159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.731358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.731383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.731575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.731603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.731783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.731811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.732009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.732034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.732205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.732230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.732405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.732429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.732572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.732597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 
00:34:22.259 [2024-07-13 20:22:09.732750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.732777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.732966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.732995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.733184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.733209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.733379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.733406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.733593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.733618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.733810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.733836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.734069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.734098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.734271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.734296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.734468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.734493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.734629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.734654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 
00:34:22.259 [2024-07-13 20:22:09.734837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.734873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.735038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.735063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.735269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.735297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.735504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.735531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.735706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.735731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.735877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.735903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.736100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.736128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.736321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.736348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.736495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.736520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.736700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.736727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 
00:34:22.259 [2024-07-13 20:22:09.736921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.736948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.737108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.259 [2024-07-13 20:22:09.737136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.259 qpair failed and we were unable to recover it. 00:34:22.259 [2024-07-13 20:22:09.737321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.737349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.737562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.737586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.737737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.737762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.737937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.737966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.738135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.738160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.738342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.738370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.738563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.738591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.738775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.738801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 
00:34:22.260 [2024-07-13 20:22:09.738988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.739017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.739202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.739230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.739404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.739429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.739618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.739646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.739828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.739856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.740086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.740111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.740271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.740296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.740463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.740504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.740667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.740692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.740904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.740933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 
00:34:22.260 [2024-07-13 20:22:09.741135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.741163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.741329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.741354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.741572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.741600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.741791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.741816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.741990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.742018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.742206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.742233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.742404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.742429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.742624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.742650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.742834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.742862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.743028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.743057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 
00:34:22.260 [2024-07-13 20:22:09.743243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.743269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.743426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.743455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.743646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.743674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.743836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.743861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.744044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.744073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.744261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.744288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.744506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.744531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.744690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.744718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.744894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.744921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.745079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.745104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 
00:34:22.260 [2024-07-13 20:22:09.745272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.745316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.745536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.745561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.745758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.260 [2024-07-13 20:22:09.745783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.260 qpair failed and we were unable to recover it. 00:34:22.260 [2024-07-13 20:22:09.745976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.746005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.746181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.746208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.746367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.746393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.746534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.746576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.746790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.746818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.747019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.747045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.747215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.747240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 
00:34:22.261 [2024-07-13 20:22:09.747382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.747407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.747570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.747595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.747758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.747786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.747948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.747978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.748141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.748168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.748356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.748383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.748560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.748588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.748754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.748779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.748953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.748996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.749165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.749193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 
00:34:22.261 [2024-07-13 20:22:09.749356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.749383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.749525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.749572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.749762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.749790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.749975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.750000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.750160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.750186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.750385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.750410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.750608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.750633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.750788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.750816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.750989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.751015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.751185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.751211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 
00:34:22.261 [2024-07-13 20:22:09.751403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.751428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.751649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.751676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.751871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.751897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.752041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.752067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.752260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.752289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.752487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.752513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.752660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.752684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.752893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.752935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.753129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.753154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 00:34:22.261 [2024-07-13 20:22:09.753316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.261 [2024-07-13 20:22:09.753344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.261 qpair failed and we were unable to recover it. 
00:34:22.261 [2024-07-13 20:22:09.753529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.261 [2024-07-13 20:22:09.753557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.261 qpair failed and we were unable to recover it.
00:34:22.261 [... the same three-record failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats ~200 more times, timestamps 2024-07-13 20:22:09.753 through 20:22:09.797, always with tqpair=0x165d570, addr=10.0.0.2, port=4420 ...]
00:34:22.267 [2024-07-13 20:22:09.796980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.267 [2024-07-13 20:22:09.797006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.267 qpair failed and we were unable to recover it.
00:34:22.267 [2024-07-13 20:22:09.797177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.797202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.797335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.797360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.797531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.797556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.797695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.797720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.797939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.797966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.798100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.798125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.798265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.798290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.798458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.798483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.798650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.798678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.798873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.798899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 
00:34:22.267 [2024-07-13 20:22:09.799061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.799086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.799257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.799282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.799452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.799477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.799643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.799669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.799854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.799888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.800103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.800132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.800297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.800322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.800511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.800538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.800718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.800746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.800937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.800963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 
00:34:22.267 [2024-07-13 20:22:09.801145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.801173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.801359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.801387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.801571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.801596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.801735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.801760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.801925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.801969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.802161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.802185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.802366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.802394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.802575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.802603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.802793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.802818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.802970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.802996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 
00:34:22.267 [2024-07-13 20:22:09.803168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.803194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.803382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.803407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.803602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.803627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.803790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.803831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.267 qpair failed and we were unable to recover it. 00:34:22.267 [2024-07-13 20:22:09.804009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.267 [2024-07-13 20:22:09.804035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.804219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.804247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.804454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.804482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.804667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.804692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.804837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.804885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.805042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.805070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 
00:34:22.268 [2024-07-13 20:22:09.805282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.805307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.805452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.805477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.805614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.805639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.805811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.805836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.806027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.806055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.806270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.806297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.806512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.806538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.806731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.806759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.806970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.806999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.807162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.807187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 
00:34:22.268 [2024-07-13 20:22:09.807362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.807405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.807588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.807616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.807776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.807801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.807961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.807990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.808179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.808207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.808384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.808409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.808575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.808602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.808817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.808842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.808985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.809010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.809158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.809183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 
00:34:22.268 [2024-07-13 20:22:09.809345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.809372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.809557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.809582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.809780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.809808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.810008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.810035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.810229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.810255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.810442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.810470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.810617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.810644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.810838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.810863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.811032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.811057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 00:34:22.268 [2024-07-13 20:22:09.811197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.268 [2024-07-13 20:22:09.811238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.268 qpair failed and we were unable to recover it. 
00:34:22.269 [2024-07-13 20:22:09.811405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.811430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.811616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.811643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.811825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.811852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.812020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.812046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.812220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.812245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.812383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.812425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.812605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.812630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.812785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.812813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.813003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.813029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.813203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.813228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 
00:34:22.269 [2024-07-13 20:22:09.813389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.813416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.813589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.813616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.813790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.813815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.813984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.814017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.814161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.814187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.814380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.814405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.814598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.814626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.814786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.814813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.814992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.815018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.815195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.815223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 
00:34:22.269 [2024-07-13 20:22:09.815374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.815399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.815564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.815589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.815754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.815796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.816010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.816038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.816220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.816245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.816397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.816425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.816631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.816659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.816855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.816886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.817031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.817056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.817233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.817261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 
00:34:22.269 [2024-07-13 20:22:09.817427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.817452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.817663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.817691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.817893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.817922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.818107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.818131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.818352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.818380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.818594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.818621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.818805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.818830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.819005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.819030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.819203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.819229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.269 [2024-07-13 20:22:09.819392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.819417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 
00:34:22.269 [2024-07-13 20:22:09.819603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.269 [2024-07-13 20:22:09.819635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.269 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.819796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.819824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.819979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.820005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.820151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.820175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.820350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.820376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.820570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.820595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.820762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.820790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.820948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.820977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.821194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.821219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.821405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.821432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 
00:34:22.270 [2024-07-13 20:22:09.821654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.821678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.821850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.821883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.822093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.822121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.822306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.822334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.822531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.822556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.822722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.822747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.822913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.822957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.823148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.823173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.823339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.823366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 00:34:22.270 [2024-07-13 20:22:09.823544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.823572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it. 
00:34:22.270 [2024-07-13 20:22:09.823768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.270 [2024-07-13 20:22:09.823793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.270 qpair failed and we were unable to recover it.
[... the identical connect()/qpair-failure pair repeats for every reconnect attempt from 20:22:09.823990 through 20:22:09.864111, all against tqpair=0x165d570 at 10.0.0.2, port=4420, all with errno = 111, each ending in "qpair failed and we were unable to recover it." ...]
00:34:22.275 [2024-07-13 20:22:09.864326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.275 [2024-07-13 20:22:09.864351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.275 qpair failed and we were unable to recover it.
00:34:22.275 [2024-07-13 20:22:09.864498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.275 [2024-07-13 20:22:09.864524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.275 qpair failed and we were unable to recover it. 00:34:22.275 [2024-07-13 20:22:09.864674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.275 [2024-07-13 20:22:09.864699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.275 qpair failed and we were unable to recover it. 00:34:22.275 [2024-07-13 20:22:09.864873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.275 [2024-07-13 20:22:09.864901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.275 qpair failed and we were unable to recover it. 00:34:22.275 [2024-07-13 20:22:09.865083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.275 [2024-07-13 20:22:09.865108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.275 qpair failed and we were unable to recover it. 00:34:22.275 [2024-07-13 20:22:09.865338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.275 [2024-07-13 20:22:09.865366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.275 qpair failed and we were unable to recover it. 00:34:22.275 [2024-07-13 20:22:09.865554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.275 [2024-07-13 20:22:09.865582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.275 qpair failed and we were unable to recover it. 00:34:22.275 [2024-07-13 20:22:09.865744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.275 [2024-07-13 20:22:09.865769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.275 qpair failed and we were unable to recover it. 00:34:22.275 [2024-07-13 20:22:09.865936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.275 [2024-07-13 20:22:09.865962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.275 qpair failed and we were unable to recover it. 00:34:22.275 [2024-07-13 20:22:09.866177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.275 [2024-07-13 20:22:09.866205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.275 qpair failed and we were unable to recover it. 00:34:22.275 [2024-07-13 20:22:09.866363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.275 [2024-07-13 20:22:09.866387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.275 qpair failed and we were unable to recover it. 
00:34:22.275 [2024-07-13 20:22:09.866555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.275 [2024-07-13 20:22:09.866580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.275 qpair failed and we were unable to recover it. 00:34:22.275 [2024-07-13 20:22:09.866715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.275 [2024-07-13 20:22:09.866740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.866930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.866955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.867087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.867112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.867299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.867328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.867543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.867578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.867736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.867762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.867907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.867933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.868067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.868091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.868250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.868275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 
00:34:22.276 [2024-07-13 20:22:09.868464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.868492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.868712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.868745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.868932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.868961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.869143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.869171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.869348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.869373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.869542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.869570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.869738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.869772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.869994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.870021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.870182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.870221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.870391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.870418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 
00:34:22.276 [2024-07-13 20:22:09.870588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.870615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.870767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.870793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.870970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.870997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.871165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.871191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.871332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.871358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.871527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.871553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.871700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.871726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.871896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.871922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.872098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.872126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 00:34:22.276 [2024-07-13 20:22:09.872316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.276 [2024-07-13 20:22:09.872341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.276 qpair failed and we were unable to recover it. 
00:34:22.557 [2024-07-13 20:22:09.872481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.872508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.872657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.872688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.872892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.872927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.873173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.873217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.873384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.873413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.873599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.873624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.873758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.873784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.873986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.874012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.874177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.874203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.874351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.874386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 
00:34:22.557 [2024-07-13 20:22:09.874565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.874591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.874730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.874762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.874936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.874963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.875132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.875161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.875350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.875385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.875545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.875570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.875714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.875741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.875921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.875947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.876139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.876165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.876331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.876357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 
00:34:22.557 [2024-07-13 20:22:09.876522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.876550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.876708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.876733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.876906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.876935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.877127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.877152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.877317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.877344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.877491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.877516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.877651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.877676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.877857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.877890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.878071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.557 [2024-07-13 20:22:09.878097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.557 qpair failed and we were unable to recover it. 00:34:22.557 [2024-07-13 20:22:09.878249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.878275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 
00:34:22.558 [2024-07-13 20:22:09.878419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.878446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.878581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.878606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.878744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.878770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.878934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.878973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.879117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.879145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.879320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.879346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.879501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.879531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.879746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.879775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.879960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.879987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.880209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.880237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 
00:34:22.558 [2024-07-13 20:22:09.880425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.880454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.880633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.880659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.880812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.880839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.881051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.881077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.881217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.881241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.881393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.881418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.881582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.881607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.881743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.881768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.881936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.881961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.882156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.882181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 
00:34:22.558 [2024-07-13 20:22:09.882347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.882372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.882532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.882559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.882711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.882738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.882933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.882959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.883125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.883150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.883350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.883375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.883618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.883643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.883790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.883815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.884025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.884054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.884209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.884234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 
00:34:22.558 [2024-07-13 20:22:09.884375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.884400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.884566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.884591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.884757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.884782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.884951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.884976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.885171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.885199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.885385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.885410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.885584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.885608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.885747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.885772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.885942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.558 [2024-07-13 20:22:09.885968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.558 qpair failed and we were unable to recover it. 00:34:22.558 [2024-07-13 20:22:09.886148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.886187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 
00:34:22.559 [2024-07-13 20:22:09.886362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.886391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.886560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.886586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.886731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.886758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.886976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.887007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.887173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.887198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.887370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.887398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.887570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.887596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.887738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.887765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.887929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.887968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.888169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.888199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 
00:34:22.559 [2024-07-13 20:22:09.888358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.888383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.888551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.888576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.888778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.888807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.889038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.889064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.889228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.889253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.889419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.889444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.889586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.889611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.889777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.889818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.889973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.890002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.890165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.890191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 
00:34:22.559 [2024-07-13 20:22:09.890363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.890388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.890558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.890585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.890721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.890746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.890928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.890957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.891162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.891191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.891409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.891435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.891606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.891632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.891773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.891798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.891967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.891993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 00:34:22.559 [2024-07-13 20:22:09.892154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.559 [2024-07-13 20:22:09.892179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.559 qpair failed and we were unable to recover it. 
[the same connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it. triplet repeats for every remaining retry from 20:22:09.892384 through 20:22:09.933752 (wall clock 00:34:22.559-00:34:22.564), against tqpair=0x165d570 up to 20:22:09.896675 and against tqpair=0x7f4924000b90 thereafter, always with addr=10.0.0.2, port=4420]
00:34:22.564 [2024-07-13 20:22:09.933937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.564 [2024-07-13 20:22:09.933967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.564 qpair failed and we were unable to recover it. 00:34:22.564 [2024-07-13 20:22:09.934173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.564 [2024-07-13 20:22:09.934202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.564 qpair failed and we were unable to recover it. 00:34:22.564 [2024-07-13 20:22:09.934362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.564 [2024-07-13 20:22:09.934388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.564 qpair failed and we were unable to recover it. 00:34:22.564 [2024-07-13 20:22:09.934559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.564 [2024-07-13 20:22:09.934602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.564 qpair failed and we were unable to recover it. 00:34:22.564 [2024-07-13 20:22:09.934794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.564 [2024-07-13 20:22:09.934822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.564 qpair failed and we were unable to recover it. 00:34:22.564 [2024-07-13 20:22:09.935042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.564 [2024-07-13 20:22:09.935068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.564 qpair failed and we were unable to recover it. 00:34:22.564 [2024-07-13 20:22:09.935235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.564 [2024-07-13 20:22:09.935265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.935453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.935481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.935669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.935695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.935883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.935926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 
00:34:22.565 [2024-07-13 20:22:09.936104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.936130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.936369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.936396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.936536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.936561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.936757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.936785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.936967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.936994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.937150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.937180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.937368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.937394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.937586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.937612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.937749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.937774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.937993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.938023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 
00:34:22.565 [2024-07-13 20:22:09.938187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.938213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.938386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.938412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.938599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.938628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.938820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.938850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.939020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.939049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.939269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.939295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.939465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.939492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.939648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.939677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.939871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.939912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.940088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.940113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 
00:34:22.565 [2024-07-13 20:22:09.940280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.940305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.940519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.940548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.940736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.940761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.940952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.940982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.941147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.941173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.941337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.941363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.941525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.941551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.941725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.941751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.565 [2024-07-13 20:22:09.941941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.565 [2024-07-13 20:22:09.941968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.565 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.942186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.942215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 
00:34:22.566 [2024-07-13 20:22:09.942358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.942386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.942609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.942634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.942826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.942855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.943061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.943087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.943256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.943282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.943424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.943451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.943636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.943664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.943860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.943891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.944043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.944069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.944237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.944262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 
00:34:22.566 [2024-07-13 20:22:09.944438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.944465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.944654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.944682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.944864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.944912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.945108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.945134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.945298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.945323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.945461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.945504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.945686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.945711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.945907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.945938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.946149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.946179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.946364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.946390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 
00:34:22.566 [2024-07-13 20:22:09.946563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.946589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.946757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.946784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.946947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.946973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.947141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.947189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.947397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.947423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.947588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.947613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.947800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.947829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.948022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.948051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.948271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.948296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.566 [2024-07-13 20:22:09.948459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.948488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 
00:34:22.566 [2024-07-13 20:22:09.948679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.566 [2024-07-13 20:22:09.948707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.566 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.948898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.948924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.949082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.949111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.949276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.949304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.949488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.949513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.949653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.949697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.949860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.949898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.950116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.950142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.950303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.950331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.950542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.950571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 
00:34:22.567 [2024-07-13 20:22:09.950780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.950806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.951001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.951030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.951209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.951237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.951398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.951425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.951594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.951620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.951804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.951833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.952034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.952060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.952198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.952224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.952414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.952443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.952610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.952637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 
00:34:22.567 [2024-07-13 20:22:09.952828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.952856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.953057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.953086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.953268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.953294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.953507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.953536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.953752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.953778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.953946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.953973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.954165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.954194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.954349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.954377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.954571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.954597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.954782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.954811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 
00:34:22.567 [2024-07-13 20:22:09.955024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.955054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.955207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.955232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.955446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.955474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.955682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.955715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.955897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.955922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.956111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.956139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.956326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.956354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.956546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.956571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.956788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.956817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.567 [2024-07-13 20:22:09.957035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.957065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 
00:34:22.567 [2024-07-13 20:22:09.957255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.567 [2024-07-13 20:22:09.957281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.567 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.957429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.957455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.957647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.957672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.957872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.957898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.958085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.958113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.958274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.958302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.958460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.958486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.958677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.958706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.958918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.958945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.959122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.959148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 
00:34:22.568 [2024-07-13 20:22:09.959312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.959341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.959548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.959576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.959787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.959813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.960006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.960036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.960222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.960251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.960444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.960470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.960659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.960687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.960885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.960916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.961130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.961156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.961316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.961344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 
00:34:22.568 [2024-07-13 20:22:09.961538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.961565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.961763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.961788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.961955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.961982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.962208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.962237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.962405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.962431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.962596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.962622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.962835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.962864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.963067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.963094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.963276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.963306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 00:34:22.568 [2024-07-13 20:22:09.963496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.568 [2024-07-13 20:22:09.963523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:22.568 qpair failed and we were unable to recover it. 
00:34:22.568 [... the same three-line failure (posix.c:1037:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats approximately 200 more times between 2024-07-13 20:22:09.963 and 20:22:10.008, cycling through tqpair handles 0x7f4924000b90, 0x7f4934000b90, and 0x7f492c000b90 ...]
00:34:22.573 [2024-07-13 20:22:10.008265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.573 [2024-07-13 20:22:10.008309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.573 qpair failed and we were unable to recover it. 00:34:22.573 [2024-07-13 20:22:10.008516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.573 [2024-07-13 20:22:10.008560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.573 qpair failed and we were unable to recover it. 00:34:22.573 [2024-07-13 20:22:10.008728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.573 [2024-07-13 20:22:10.008754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.573 qpair failed and we were unable to recover it. 00:34:22.573 [2024-07-13 20:22:10.008944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.573 [2024-07-13 20:22:10.008994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.573 qpair failed and we were unable to recover it. 00:34:22.573 [2024-07-13 20:22:10.009164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.573 [2024-07-13 20:22:10.009207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.573 qpair failed and we were unable to recover it. 00:34:22.573 [2024-07-13 20:22:10.009425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.573 [2024-07-13 20:22:10.009466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.573 qpair failed and we were unable to recover it. 00:34:22.573 [2024-07-13 20:22:10.009657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.573 [2024-07-13 20:22:10.009682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.573 qpair failed and we were unable to recover it. 00:34:22.573 [2024-07-13 20:22:10.009820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.573 [2024-07-13 20:22:10.009846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.573 qpair failed and we were unable to recover it. 00:34:22.573 [2024-07-13 20:22:10.010040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.573 [2024-07-13 20:22:10.010086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.573 qpair failed and we were unable to recover it. 00:34:22.573 [2024-07-13 20:22:10.010267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.573 [2024-07-13 20:22:10.010311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.573 qpair failed and we were unable to recover it. 
00:34:22.573 [2024-07-13 20:22:10.010498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.010542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.010716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.010743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.010933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.010962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.011174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.011218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.011438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.011481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.011651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.011677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.011846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.011877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.012104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.012147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.012338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.012381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.012596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.012647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 
00:34:22.574 [2024-07-13 20:22:10.012795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.012820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.012987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.013030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.013228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.013256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.013441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.013485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.013654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.013680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.013852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.013883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.014054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.014097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.014288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.014331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.014555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.014598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.014800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.014826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 
00:34:22.574 [2024-07-13 20:22:10.015019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.015064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.015256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.015301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.015497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.015540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.015707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.015733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.015883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.015909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.016101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.016144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.016343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.016386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.016605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.016648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.016794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.016820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.017021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.017068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 
00:34:22.574 [2024-07-13 20:22:10.017264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.017294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.017529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.017571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.017743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.017769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.017961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.018010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.018233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.018276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.574 [2024-07-13 20:22:10.018452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.574 [2024-07-13 20:22:10.018495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.574 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.018642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.018667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.018837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.018862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.019064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.019108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.019297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.019340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 
00:34:22.575 [2024-07-13 20:22:10.019501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.019544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.019714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.019740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.019956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.020001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.020186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.020230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.020423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.020451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.020635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.020661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.020856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.020887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.021083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.021129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.021331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.021374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.021603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.021646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 
00:34:22.575 [2024-07-13 20:22:10.021787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.021813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.021982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.022008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.022200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.022249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.022409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.022437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.022676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.022718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.022912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.022941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.023129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.023172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.023352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.023394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.026844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.026901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.027118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.027166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 
00:34:22.575 [2024-07-13 20:22:10.027366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.027415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.027563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.027590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.027761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.027797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.028027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.028085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.028336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.028382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.028591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.028622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.028799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.028828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.029028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.029057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.029230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.029258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.029427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.029462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 
00:34:22.575 [2024-07-13 20:22:10.029706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.029777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.029987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.030016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.030184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.030213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.030426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.030477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.030691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.030719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.030890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.030917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.575 [2024-07-13 20:22:10.031126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.575 [2024-07-13 20:22:10.031156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.575 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.031339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.031367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.031557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.031585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.031780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.031806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 
00:34:22.576 [2024-07-13 20:22:10.031974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.032000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.032168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.032198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.032388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.032430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.032618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.032644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.032820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.032845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.033017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.033047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.033252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.033280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.033494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.033531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.033722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.033749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.033888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.033931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 
00:34:22.576 [2024-07-13 20:22:10.034149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.034178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.034366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.034395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.034602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.034631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.034845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.034877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.035066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.035095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.035313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.035340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.035519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.035546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.035704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.035729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.035877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.035920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.036126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.036154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 
00:34:22.576 [2024-07-13 20:22:10.036385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.036413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.036627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.036655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.036840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.036864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.037026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.037051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.037258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.037286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.037453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.037480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.037770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.037819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.037986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.038012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.038200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.038228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.038546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.038596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 
00:34:22.576 [2024-07-13 20:22:10.038808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.038833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.038980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.039008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.039163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.039204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.039370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.039397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.039577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.039612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.039843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.039878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.040018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.576 [2024-07-13 20:22:10.040043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.576 qpair failed and we were unable to recover it. 00:34:22.576 [2024-07-13 20:22:10.040238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.577 [2024-07-13 20:22:10.040266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.577 qpair failed and we were unable to recover it. 00:34:22.577 [2024-07-13 20:22:10.040601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.577 [2024-07-13 20:22:10.040652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.577 qpair failed and we were unable to recover it. 00:34:22.577 [2024-07-13 20:22:10.040878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.577 [2024-07-13 20:22:10.040905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.577 qpair failed and we were unable to recover it. 
00:34:22.577 [2024-07-13 20:22:10.041048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.577 [2024-07-13 20:22:10.041073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.577 qpair failed and we were unable to recover it.
00:34:22.582 [... the same three-line sequence repeats continuously through 2024-07-13 20:22:10.083150, differing only in microsecond timestamps: every reconnect attempt to 10.0.0.2 port 4420 for tqpair=0x165d570 fails with connect() errno = 111, and the qpair cannot be recovered ...]
00:34:22.582 [2024-07-13 20:22:10.083297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.083323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.083470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.083499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.083641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.083665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.083840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.083878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.084083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.084109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.084276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.084301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.084466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.084490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.084658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.084682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.084847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.084880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.085062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.085087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 
00:34:22.582 [2024-07-13 20:22:10.085255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.085279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.085443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.085467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.085632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.085657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.085821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.085847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.086026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.086052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.086224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.086249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.086423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.086448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.086586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.086611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.086777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.086801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.086962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.086987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 
00:34:22.582 [2024-07-13 20:22:10.087153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.087178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.087344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.087370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.087533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.087558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.087704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.087729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.087897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.087923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.088088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.088111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.088277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.088302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.088449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.088474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.088643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.088668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 00:34:22.582 [2024-07-13 20:22:10.088843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.582 [2024-07-13 20:22:10.088874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.582 qpair failed and we were unable to recover it. 
00:34:22.583 [2024-07-13 20:22:10.089049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.089074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.089261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.089285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.089458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.089483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.089625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.089649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.089815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.089843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.090018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.090044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.090213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.090238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.090408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.090435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.090601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.090627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.090797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.090821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 
00:34:22.583 [2024-07-13 20:22:10.090987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.091012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.091183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.091208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.091354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.091380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.091552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.091577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.091713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.091738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.091900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.091927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.092100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.092125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.092273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.092298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.092454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.092480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.092672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.092697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 
00:34:22.583 [2024-07-13 20:22:10.092864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.092896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.093065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.093089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.093224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.093249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.093427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.093453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.093646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.093671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.093839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.093864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.094018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.094043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.094195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.094221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.094391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.094416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.094605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.094630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 
00:34:22.583 [2024-07-13 20:22:10.094797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.094823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.094977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.095004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.095173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.095198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.095362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.095386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.095577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.095602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.583 [2024-07-13 20:22:10.095763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.583 [2024-07-13 20:22:10.095788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.583 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.095933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.095961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.096153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.096178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.096323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.096348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.096535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.096565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 
00:34:22.584 [2024-07-13 20:22:10.096759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.096787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.096964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.096993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.097180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.097207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.097429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.097479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.097678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.097705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.097900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.097932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.098136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.098161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.098327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.098353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.098517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.098542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.098684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.098707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 
00:34:22.584 [2024-07-13 20:22:10.098876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.098902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.099076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.099101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.099277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.099302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.099495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.099519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.099681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.099705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.099892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.099919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.100086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.100111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.100284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.100308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.100478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.100502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.100666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.100690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 
00:34:22.584 [2024-07-13 20:22:10.100886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.100911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.101057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.101081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.101220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.101245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.101406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.101431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.101565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.101589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.101760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.101785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.101928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.101958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.102125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.102149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.102316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.102342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.102504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.102529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 
00:34:22.584 [2024-07-13 20:22:10.102695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.102720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.102885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.102910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.103079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.103104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.103272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.103297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.103486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.103511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.103681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.103705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.103880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.584 [2024-07-13 20:22:10.103905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.584 qpair failed and we were unable to recover it. 00:34:22.584 [2024-07-13 20:22:10.104042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.104067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.104207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.104231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.104383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.104407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 
00:34:22.585 [2024-07-13 20:22:10.104574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.104599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.104761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.104786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.104960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.104985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.105127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.105151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.105317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.105342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.105513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.105538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.105707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.105731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.105904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.105930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.106118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.106143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.106286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.106310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 
00:34:22.585 [2024-07-13 20:22:10.106513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.106537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.106674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.106699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.106876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.106901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.107041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.107070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.107234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.107259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.107427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.107451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.107617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.107642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.107835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.107859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.108011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.108036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.108231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.108255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 
00:34:22.585 [2024-07-13 20:22:10.108428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.108451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.108612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.108637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.108805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.108830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.108967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.108992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.109135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.109160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.109321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.109346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.109490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.109513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.109693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.109718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.109882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.109907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 00:34:22.585 [2024-07-13 20:22:10.110074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.585 [2024-07-13 20:22:10.110099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.585 qpair failed and we were unable to recover it. 
00:34:22.590 [2024-07-13 20:22:10.147912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.590 [2024-07-13 20:22:10.147956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.590 qpair failed and we were unable to recover it. 00:34:22.590 [2024-07-13 20:22:10.148129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.590 [2024-07-13 20:22:10.148154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.590 qpair failed and we were unable to recover it. 00:34:22.590 [2024-07-13 20:22:10.148288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.590 [2024-07-13 20:22:10.148313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.590 qpair failed and we were unable to recover it. 00:34:22.590 [2024-07-13 20:22:10.148477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.590 [2024-07-13 20:22:10.148501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.590 qpair failed and we were unable to recover it. 00:34:22.590 [2024-07-13 20:22:10.148674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.590 [2024-07-13 20:22:10.148699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.590 qpair failed and we were unable to recover it. 00:34:22.590 [2024-07-13 20:22:10.148836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.590 [2024-07-13 20:22:10.148860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.590 qpair failed and we were unable to recover it. 00:34:22.590 [2024-07-13 20:22:10.149037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.590 [2024-07-13 20:22:10.149062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.590 qpair failed and we were unable to recover it. 00:34:22.590 [2024-07-13 20:22:10.149204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.590 [2024-07-13 20:22:10.149229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.590 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.149419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.149444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.149612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.149637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 
00:34:22.591 [2024-07-13 20:22:10.149832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.149857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.150013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.150037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.150208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.150233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.150422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.150447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.150592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.150617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.150811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.150835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.150990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.151015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.151186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.151211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.151341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.151365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.151527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.151551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 
00:34:22.591 [2024-07-13 20:22:10.151682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.151706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.151876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.151900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.152068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.152092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.152254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.152279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.152448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.152472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.152634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.152659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.152849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.152880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.153052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.153076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.153227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.153252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.153397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.153420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 
00:34:22.591 [2024-07-13 20:22:10.153565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.153590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.153759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.153784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.153951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.153977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.154141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.154166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.154360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.154385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.154553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.154577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.154769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.154794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.154955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.154981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.155145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.155170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.155341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.155366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 
00:34:22.591 [2024-07-13 20:22:10.155536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.155561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.155726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.155766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.155969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.155997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.156149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.156175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.156375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.156400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.156571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.156597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.591 [2024-07-13 20:22:10.156785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.591 [2024-07-13 20:22:10.156811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.591 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.157006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.157045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.157215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.157241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.157405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.157431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 
00:34:22.592 [2024-07-13 20:22:10.157598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.157623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.157791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.157815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.157956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.157982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.158148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.158173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.158318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.158343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.158555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.158580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.158746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.158771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.158957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.158983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.159125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.159150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.159341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.159365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 
00:34:22.592 [2024-07-13 20:22:10.159555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.159579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.159742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.159766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.159937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.159962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.160154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.160179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.160351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.160376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.160540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.160565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.160705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.160731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.160900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.160926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.161095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.161124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.161285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.161311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 
00:34:22.592 [2024-07-13 20:22:10.161503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.161528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.161663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.161688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.161827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.161852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.162017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.162043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.162187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.162212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.162376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.162401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.162541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.162566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.162729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.162754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.162922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.162948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.163085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.163110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 
00:34:22.592 [2024-07-13 20:22:10.163245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.163270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.163412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.163437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.163589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.163614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.163806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.163831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.164000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.164025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.164193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.164218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.164357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.164382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.164552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.164578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.164748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.164773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.164920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.164945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 
00:34:22.592 [2024-07-13 20:22:10.165082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.165107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.592 [2024-07-13 20:22:10.165270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.592 [2024-07-13 20:22:10.165295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.592 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.165460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.165485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.165648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.165673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.165840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.165870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.166048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.166078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.166224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.166248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.166389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.166415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.166579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.166604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.166770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.166795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 
00:34:22.593 [2024-07-13 20:22:10.166973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.167000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.167141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.167165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.167336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.167360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.167554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.167579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.167767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.167792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.167944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.167969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.168111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.168135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.168301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.168326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.168487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.168511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.168655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.168680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 
00:34:22.593 [2024-07-13 20:22:10.168828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.168852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.169026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.169051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.169192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.169216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.169359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.169384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.169551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.169575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.169740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.169781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.170011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.170050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.170225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.170252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.170399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.170425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.170596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.170621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 
00:34:22.593 [2024-07-13 20:22:10.170764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.170789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.170935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.170963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.171108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.171139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.171281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.171306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.171469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.171494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.171665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.171690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.171829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.171854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.172011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.172038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.172207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.172233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.172376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.172401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 
00:34:22.593 [2024-07-13 20:22:10.172594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.172620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.172811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.172836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.173005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.173031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.173177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.173202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.173375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.173400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.173574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.593 [2024-07-13 20:22:10.173600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.593 qpair failed and we were unable to recover it. 00:34:22.593 [2024-07-13 20:22:10.173750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.173776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.173953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.173978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.174147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.174172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.174342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.174367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 
00:34:22.594 [2024-07-13 20:22:10.174562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.174587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.174754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.174779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.174932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.174958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.175108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.175133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.175302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.175327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.175470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.175495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.175665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.175690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.175921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.175946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.176113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.176140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.176349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.176375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 
00:34:22.594 [2024-07-13 20:22:10.176525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.176550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.176716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.176740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.176907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.176933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.177102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.177128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.177295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.177322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.177497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.177523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.177713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.177739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.177885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.177912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.178078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.178103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.178268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.178293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 
00:34:22.594 [2024-07-13 20:22:10.178488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.178513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.178661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.178686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.178884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.178915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.179062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.179087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.179284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.179310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.179480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.179505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.179637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.179662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.179832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.179856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.180035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.180062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.180236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.180261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 
00:34:22.594 [2024-07-13 20:22:10.180423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.180448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.180613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.180640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.180785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.180810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.181004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.181030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.181198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.181224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.181357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.181382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.181586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.181611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.181753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.181778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.181923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.181949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.594 [2024-07-13 20:22:10.182113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.182138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 
00:34:22.594 [2024-07-13 20:22:10.182330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.594 [2024-07-13 20:22:10.182356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.594 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.182550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.182575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.182742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.182767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.182904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.182931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.183099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.183124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.183260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.183286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.183469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.183494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.183661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.183686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.183869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.183895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.184065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.184091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 
00:34:22.595 [2024-07-13 20:22:10.184273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.184298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.184437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.184462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.184604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.184629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.184798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.184822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.185007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.185033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.185194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.185219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.185383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.185408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.185573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.185598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.185758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.185783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.185943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.185969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 
00:34:22.595 [2024-07-13 20:22:10.186140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.186165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.186357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.186382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.186546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.186575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.186755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.186780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.186932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.186959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.187126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.187151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.187286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.187311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.187478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.187504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.187644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.187671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.187843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.187880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 
00:34:22.595 [2024-07-13 20:22:10.188031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.188057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.188199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.188224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.188423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.188448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.188617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.188641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.188835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.188860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.189044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.189069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.189244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.189269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.595 [2024-07-13 20:22:10.189430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.595 [2024-07-13 20:22:10.189455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.595 qpair failed and we were unable to recover it. 00:34:22.596 [2024-07-13 20:22:10.189616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.596 [2024-07-13 20:22:10.189641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.596 qpair failed and we were unable to recover it. 00:34:22.596 [2024-07-13 20:22:10.189834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.596 [2024-07-13 20:22:10.189859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.596 qpair failed and we were unable to recover it. 
00:34:22.596 [2024-07-13 20:22:10.190067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.596 [2024-07-13 20:22:10.190092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.596 qpair failed and we were unable to recover it. 00:34:22.596 [2024-07-13 20:22:10.190256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.596 [2024-07-13 20:22:10.190281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.596 qpair failed and we were unable to recover it. 00:34:22.596 [2024-07-13 20:22:10.190441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.596 [2024-07-13 20:22:10.190465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.596 qpair failed and we were unable to recover it. 00:34:22.596 [2024-07-13 20:22:10.190602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.596 [2024-07-13 20:22:10.190627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.596 qpair failed and we were unable to recover it. 00:34:22.596 [2024-07-13 20:22:10.190820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.596 [2024-07-13 20:22:10.190844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.596 qpair failed and we were unable to recover it. 00:34:22.596 [2024-07-13 20:22:10.191026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.596 [2024-07-13 20:22:10.191052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.596 qpair failed and we were unable to recover it. 00:34:22.596 [2024-07-13 20:22:10.191217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.596 [2024-07-13 20:22:10.191242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.596 qpair failed and we were unable to recover it. 00:34:22.596 [2024-07-13 20:22:10.191424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.596 [2024-07-13 20:22:10.191450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.596 qpair failed and we were unable to recover it. 00:34:22.596 [2024-07-13 20:22:10.191614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.596 [2024-07-13 20:22:10.191640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.596 qpair failed and we were unable to recover it. 00:34:22.596 [2024-07-13 20:22:10.191834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.596 [2024-07-13 20:22:10.191859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.596 qpair failed and we were unable to recover it. 
00:34:22.596 [2024-07-13 20:22:10.192047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.596 [2024-07-13 20:22:10.192072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.596 qpair failed and we were unable to recover it. 00:34:22.596 [2024-07-13 20:22:10.192234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.596 [2024-07-13 20:22:10.192259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.596 qpair failed and we were unable to recover it. 00:34:22.596 [2024-07-13 20:22:10.192404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.596 [2024-07-13 20:22:10.192431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.596 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.192604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.192630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.192823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.192848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.193036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.193062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.193262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.193287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.193458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.193483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.193676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.193701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.193875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.193913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 
00:34:22.877 [2024-07-13 20:22:10.194079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.194104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.194240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.194266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.194462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.194492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.194636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.194662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.194811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.194836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.195036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.195062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.195211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.195238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.195454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.195480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.195652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.195678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.195846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.195879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 
00:34:22.877 [2024-07-13 20:22:10.196073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.196099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.196231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.196256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.196401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.196426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.196592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.196618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.196789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.877 [2024-07-13 20:22:10.196814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.877 qpair failed and we were unable to recover it. 00:34:22.877 [2024-07-13 20:22:10.196984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.197010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.197165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.197191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.197356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.197383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.197549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.197575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.197739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.197763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 
00:34:22.878 [2024-07-13 20:22:10.197951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.197977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.198149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.198173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.198340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.198365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.198531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.198556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.198723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.198749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.198924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.198949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.199091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.199117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.199316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.199341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.199483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.199507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.199662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.199687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 
00:34:22.878 [2024-07-13 20:22:10.199857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.199899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.200046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.200071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.200201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.200226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.200367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.200393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.200580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.200605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.200803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.200828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.201029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.201055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.201204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.201229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.201393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.201418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.201617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.201642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 
00:34:22.878 [2024-07-13 20:22:10.201833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.201858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.202043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.202069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.202208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.202236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.202431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.202457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.202624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.202649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.202787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.202814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.202989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.203015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.203194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.203219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.203384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.203410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.203572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.203597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 
00:34:22.878 [2024-07-13 20:22:10.203766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.203791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.203986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.204012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.204146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.204172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.204316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.204341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.204537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.204563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.878 [2024-07-13 20:22:10.204733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.878 [2024-07-13 20:22:10.204758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.878 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.204958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.204984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.205155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.205180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.205372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.205397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.205567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.205593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 
00:34:22.879 [2024-07-13 20:22:10.205763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.205788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.205989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.206015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.206206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.206242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.206433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.206459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.206629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.206654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.206817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.206842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.207024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.207050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.207204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.207229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.207401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.207426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.207595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.207621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 
00:34:22.879 [2024-07-13 20:22:10.207809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.207834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.208024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.208051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.208215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.208240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.208379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.208404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.208573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.208598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.208740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.208765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.208900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.208926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.209094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.209120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.209284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.209308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 00:34:22.879 [2024-07-13 20:22:10.209452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.879 [2024-07-13 20:22:10.209478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:22.879 qpair failed and we were unable to recover it. 
00:34:22.879 [2024-07-13 20:22:10.209643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.209668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.209840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.209870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.210045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.210074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.210270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.210295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.210457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.210482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.210650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.210675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.210840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.210869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.211069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.211094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.211238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.211263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.211455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.211480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.211650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.211675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.211839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.211864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.212040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.212065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.212259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.212284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.212450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.212475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.212613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.212638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.879 qpair failed and we were unable to recover it.
00:34:22.879 [2024-07-13 20:22:10.212833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.879 [2024-07-13 20:22:10.212858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.213056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.213081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.213272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.213297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.213465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.213491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.213629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.213655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.213847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.213879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.214043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.214069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.214260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.214285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.214431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.214456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.214619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.214645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.214847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.214879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.215063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.215089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.215281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.215306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.215449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.215474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.215634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.215659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.215830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.215854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.216067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.216092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.216285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.216310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.216503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.216528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.216692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.216716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.216885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.216911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.217049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.217074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.217263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.217288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.217478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.217503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.217673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.217698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.217860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.217891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.218087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.218116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.218278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.218303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.218468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.218494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.218661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.218686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.218850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.218880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.219049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.219074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.219240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.219265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.219445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.219470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.219606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.219632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.219813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.219837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.220036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.220062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.220223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.220248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.220415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.220440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.220613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.220638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.880 [2024-07-13 20:22:10.220835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.880 [2024-07-13 20:22:10.220860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.880 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.221009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.221035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.221207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.221232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.221396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.221420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.221559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.221584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.221749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.221774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.221972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.221997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.222165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.222190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.222356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.222381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.222543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.222567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.222734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.222759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.222907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.222932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.223178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.223203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.223377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.223402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.223579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.223603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.223771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.223796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.223962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.223987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.224129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.224155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.224346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.224372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.224512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.224537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.224708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.224733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.224879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.224906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.225100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.225125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.225295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.225319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.225486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.225511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.225707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.225732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.225893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.225918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.226094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.226120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.226285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.226309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.226553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.226578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.881 [2024-07-13 20:22:10.226750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.881 [2024-07-13 20:22:10.226776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.881 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.226924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.226950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.227122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.227147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.227347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.227372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.227516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.227542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.227736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.227761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.227952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.227978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.228170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.228196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.228365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.228390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.228549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.228574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.228748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.228774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.228941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.228967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.229107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.229134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.229299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.229324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.229462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.229487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.229654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.229679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.229846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.229878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.230047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.230073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.230263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.230288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.230423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.230448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.230640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.230665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.230831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.230857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.231013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.231039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.231177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.231206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.231368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.231393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.231559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.231584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.231727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.231752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.231919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.231945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.232119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.232145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.232313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.232338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.232510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.232535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.232683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.232708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.232885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.232911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.233106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.233132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.233298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.233323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.233496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.233521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.233663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.233688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.233856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.233888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.234087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.234113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.234249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.234275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.234447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.234472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.882 [2024-07-13 20:22:10.234662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.882 [2024-07-13 20:22:10.234687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.882 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.234857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.234889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.235083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.235108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.235275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.235301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.235443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.235468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.235664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.235689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.235838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.235863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.236067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.236092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.236250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.236275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.236444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.236470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.236658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.236683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.236818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.236844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.237009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.237048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.237273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.237305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.237506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.237544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.237722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.237748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.237944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.237970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.238139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.238164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.238357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.238384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.238546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.238573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.238849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.238907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.239073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.239098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.239275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.239305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.239525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.239573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.239830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.239890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.240083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.240108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.240372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.240422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.240675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.240724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.240899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.240925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.241096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.241121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.241289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.241313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.241472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.241496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.241672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.241699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.241923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.241948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.242099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.242125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.242292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.242319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.242509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.242537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.242703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.242727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.242915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.242941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.243103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.243128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.243276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.243300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.883 qpair failed and we were unable to recover it.
00:34:22.883 [2024-07-13 20:22:10.243510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.883 [2024-07-13 20:22:10.243537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.884 qpair failed and we were unable to recover it.
00:34:22.884 [2024-07-13 20:22:10.243701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.884 [2024-07-13 20:22:10.243728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.884 qpair failed and we were unable to recover it.
00:34:22.884 [2024-07-13 20:22:10.243927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.243953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.244158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.244182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.244350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.244376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.244546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.244570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.244764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.244793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.244983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.245010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.245143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.245173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.245382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.245421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.245586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.245610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.245776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.245800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 
00:34:22.884 [2024-07-13 20:22:10.245943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.245968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.246126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.246151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.246285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.246310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.246581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.246630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.246823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.246848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.247049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.247074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.247219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.247244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.247388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.247414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.247558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.247583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.247802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.247830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 
00:34:22.884 [2024-07-13 20:22:10.248064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.248089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.248232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.248257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.248529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.248579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.248762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.248792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.248980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.249006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.249142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.249168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.249358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.249382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.249522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.249547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.249730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.249758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.249920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.249946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 
00:34:22.884 [2024-07-13 20:22:10.250110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.250135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.250323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.250350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.250558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.250586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.250746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.250771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.250967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.250993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.251134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.251159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.251328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.251354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.251586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.251637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.251849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.251882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 00:34:22.884 [2024-07-13 20:22:10.252075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.884 [2024-07-13 20:22:10.252100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.884 qpair failed and we were unable to recover it. 
00:34:22.885 [2024-07-13 20:22:10.252334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.252383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.252529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.252556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.252764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.252792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.252959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.252985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.253154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.253179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.253349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.253374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.253621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.253665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.253882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.253926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.254121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.254147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.254283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.254308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 
00:34:22.885 [2024-07-13 20:22:10.254504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.254529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.254695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.254722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.254891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.254929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.255098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.255123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.255259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.255284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.255556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.255606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.255793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.255822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.256018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.256044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.256215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.256240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.256411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.256435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 
00:34:22.885 [2024-07-13 20:22:10.256602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.256627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.256830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.256858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.257024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.257049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.257205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.257230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.257459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.257512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.257667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.257695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.257838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.257872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.258038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.258063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.258200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.258225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.258358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.258382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 
00:34:22.885 [2024-07-13 20:22:10.258572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.258599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.258776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.258803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.259004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.259030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.259191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.259216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.259355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.259384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.259579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.259605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.259798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.259823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.259999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.260025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.260192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.260217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.260353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.260378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 
00:34:22.885 [2024-07-13 20:22:10.260550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.885 [2024-07-13 20:22:10.260575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.885 qpair failed and we were unable to recover it. 00:34:22.885 [2024-07-13 20:22:10.260710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.260735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.260908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.260934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.261093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.261117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.261286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.261310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.261469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.261494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.261659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.261684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.261827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.261851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.262013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.262038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.262230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.262255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 
00:34:22.886 [2024-07-13 20:22:10.262402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.262427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.262626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.262651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.262823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.262847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.263002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.263027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.263219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.263243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.263378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.263403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.263563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.263588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.263755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.263779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.263951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.263977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.264146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.264171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 
00:34:22.886 [2024-07-13 20:22:10.264303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.264328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.264473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.264502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.264666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.264691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.264835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.264860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.265039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.265064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.265237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.265262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.265425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.265450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.265615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.265640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.265775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.265800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.265963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.265989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 
00:34:22.886 [2024-07-13 20:22:10.266128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.266153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.266346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.266371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.266508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.266532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.266700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.266724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.266901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.266926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.267092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.886 [2024-07-13 20:22:10.267117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.886 qpair failed and we were unable to recover it. 00:34:22.886 [2024-07-13 20:22:10.267291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.267315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.267510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.267535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.267671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.267695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.267863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.267894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 
00:34:22.887 [2024-07-13 20:22:10.268057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.268082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.268285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.268309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.268474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.268499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.268635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.268660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.268833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.268858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.269031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.269055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.269221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.269245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.269438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.269462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.269597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.269621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.269794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.269819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 
00:34:22.887 [2024-07-13 20:22:10.269987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.270013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.270179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.270204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.270397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.270422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.270569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.270593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.270788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.270813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.270982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.271008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.271176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.271200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.271362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.271387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.271579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.271603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.271769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.271793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 
00:34:22.887 [2024-07-13 20:22:10.271986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.272012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.272151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.272176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.272373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.272398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.272569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.272594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.272739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.272763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.272894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.272919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.273082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.273107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.273267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.273292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.273462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.273487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 00:34:22.887 [2024-07-13 20:22:10.273677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.887 [2024-07-13 20:22:10.273702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.887 qpair failed and we were unable to recover it. 
00:34:22.887 [2024-07-13 20:22:10.273892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.887 [2024-07-13 20:22:10.273918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.887 qpair failed and we were unable to recover it.
00:34:22.893 [... the previous three lines repeat for every reconnect attempt, with identical errno = 111 and identical tqpair=0x165d570 / addr=10.0.0.2, port=4420, from 2024-07-13 20:22:10.273892 through 2024-07-13 20:22:10.313264 ...]
00:34:22.893 [2024-07-13 20:22:10.313403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.313428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.313629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.313653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.313794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.313818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.313984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.314009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.314198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.314223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.314394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.314418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.314584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.314608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.314777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.314802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.315004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.315030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.315199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.315224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 
00:34:22.893 [2024-07-13 20:22:10.315399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.315424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.315618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.315643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.315820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.315845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.315984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.316009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.316184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.316209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.316377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.316401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.316540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.316565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.316734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.316758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.316891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.316917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.317109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.317134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 
00:34:22.893 [2024-07-13 20:22:10.317301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.317326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.317486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.317511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.317677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.317702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.317897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.317922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.318117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.318142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.318281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.318306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.318445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.318474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.318646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.318671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.318834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.318858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.319011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.319036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 
00:34:22.893 [2024-07-13 20:22:10.319203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.319227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.319364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.319388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.319554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.319579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.319740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.319765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.319907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.319933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.320074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.893 [2024-07-13 20:22:10.320101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.893 qpair failed and we were unable to recover it. 00:34:22.893 [2024-07-13 20:22:10.320265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.320290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.320458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.320482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.320675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.320700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.320852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.320885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 
00:34:22.894 [2024-07-13 20:22:10.321051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.321077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.321251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.321275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.321440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.321465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.321602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.321627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.321796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.321821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.321982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.322008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.322157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.322181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.322350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.322375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.322547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.322571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.322742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.322767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 
00:34:22.894 [2024-07-13 20:22:10.322909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.322936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.323082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.323107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.323274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.323299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.323445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.323475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.323650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.323675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.323841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.323886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.324089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.324114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.324256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.324281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.324452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.324477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.324651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.324676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 
00:34:22.894 [2024-07-13 20:22:10.324820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.324845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.325015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.325041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.325244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.325269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.325440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.325465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.325625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.325649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.325815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.325840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.325988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.326014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.326162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.326187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.326353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.894 [2024-07-13 20:22:10.326377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.894 qpair failed and we were unable to recover it. 00:34:22.894 [2024-07-13 20:22:10.326525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.326550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 
00:34:22.895 [2024-07-13 20:22:10.326683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.326707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.326910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.326936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.327130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.327155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.327296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.327321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.327515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.327540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.327707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.327731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.327870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.327897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.328067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.328092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.328260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.328285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.328448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.328473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 
00:34:22.895 [2024-07-13 20:22:10.328639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.328670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.328835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.328859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.328998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.329023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.329193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.329219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.329356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.329382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.329555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.329580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.329778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.329802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.329942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.329967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.330133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.330158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.330291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.330316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 
00:34:22.895 [2024-07-13 20:22:10.330477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.330502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.330671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.330696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.330861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.330893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.331093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.331118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.331284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.331309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.331481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.331506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.331646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.331670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.331805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.331830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.331981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.332007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.332174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.332201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 
00:34:22.895 [2024-07-13 20:22:10.332401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.332426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.332626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.332651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.332814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.332839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.332986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.333011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.333182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.333207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.333372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.333396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.333567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.333592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.333731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.333756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.333928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.333954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 00:34:22.895 [2024-07-13 20:22:10.334098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.895 [2024-07-13 20:22:10.334123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.895 qpair failed and we were unable to recover it. 
00:34:22.895 [2024-07-13 20:22:10.334289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.334313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.334445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.334470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.334639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.334664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.334871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.334896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.335063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.335088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.335257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.335282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.335423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.335448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.335582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.335607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.335776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.335802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.335964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.335989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 
00:34:22.896 [2024-07-13 20:22:10.336145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.336170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.336359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.336388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.336539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.336564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.336744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.336772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.336965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.336990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.337128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.337153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.337342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.337367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.337515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.337540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.337708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.337733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.337903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.337929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 
00:34:22.896 [2024-07-13 20:22:10.338122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.338147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.338316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.338341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.338474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.338500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.338668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.338693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.338833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.338858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.339032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.339057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.339201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.339226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.339370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.339394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.339558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.339583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 00:34:22.896 [2024-07-13 20:22:10.339743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.896 [2024-07-13 20:22:10.339768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.896 qpair failed and we were unable to recover it. 
00:34:22.901 [2024-07-13 20:22:10.375734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.901 [2024-07-13 20:22:10.375759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.901 qpair failed and we were unable to recover it. 00:34:22.901 [2024-07-13 20:22:10.375927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.901 [2024-07-13 20:22:10.375953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.901 qpair failed and we were unable to recover it. 00:34:22.901 [2024-07-13 20:22:10.376122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.901 [2024-07-13 20:22:10.376149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.901 qpair failed and we were unable to recover it. 00:34:22.901 [2024-07-13 20:22:10.376321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.901 [2024-07-13 20:22:10.376347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.901 qpair failed and we were unable to recover it. 00:34:22.901 [2024-07-13 20:22:10.376509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.901 [2024-07-13 20:22:10.376534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.901 qpair failed and we were unable to recover it. 00:34:22.901 [2024-07-13 20:22:10.376703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.901 [2024-07-13 20:22:10.376728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.901 qpair failed and we were unable to recover it. 00:34:22.901 [2024-07-13 20:22:10.376900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.901 [2024-07-13 20:22:10.376925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.901 qpair failed and we were unable to recover it. 00:34:22.901 [2024-07-13 20:22:10.377090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.901 [2024-07-13 20:22:10.377116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.901 qpair failed and we were unable to recover it. 00:34:22.901 [2024-07-13 20:22:10.377283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.901 [2024-07-13 20:22:10.377314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.901 qpair failed and we were unable to recover it. 00:34:22.901 [2024-07-13 20:22:10.377485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.901 [2024-07-13 20:22:10.377511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.901 qpair failed and we were unable to recover it. 
00:34:22.901 [2024-07-13 20:22:10.377653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.901 [2024-07-13 20:22:10.377679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.901 qpair failed and we were unable to recover it. 00:34:22.901 [2024-07-13 20:22:10.377851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.901 [2024-07-13 20:22:10.377882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.901 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.378033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.378059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.378227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.378251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.378442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.378466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.378602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.378627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.378796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.378821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.378962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.378988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.379147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.379172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.379305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.379330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 
00:34:22.902 [2024-07-13 20:22:10.379471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.379496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.379694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.379718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.379862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.379895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.380063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.380089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.380249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.380274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.380419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.380444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.380611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.380636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.380803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.380828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.380992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.381018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.381182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.381207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 
00:34:22.902 [2024-07-13 20:22:10.381395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.381420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.381581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.381606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.381775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.381801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.381966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.381993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.382151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.382176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.382325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.382354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.382547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.382572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.382739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.382764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.382934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.382960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.383093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.383118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 
00:34:22.902 [2024-07-13 20:22:10.383283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.383308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.383454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.383480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.383652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.383677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.383848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.383878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.384011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.384036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.384202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.384227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.384391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.384415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.384581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.384606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.902 qpair failed and we were unable to recover it. 00:34:22.902 [2024-07-13 20:22:10.384777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.902 [2024-07-13 20:22:10.384801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.384976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.385002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 
00:34:22.903 [2024-07-13 20:22:10.385169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.385194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.385356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.385381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.385544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.385568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.385753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.385778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.385946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.385973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.386118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.386143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.386308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.386333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.386602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.386642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.386787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.386812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.386972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.386997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 
00:34:22.903 [2024-07-13 20:22:10.387195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.387220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.387414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.387441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.387605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.387634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.387803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.387827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.388002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.388027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.388222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.388250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.388435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.388477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.388647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.388672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.388840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.388871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.389043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.389068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 
00:34:22.903 [2024-07-13 20:22:10.389205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.389229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.389368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.389393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.389559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.389584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.389723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.389748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.389915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.389941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.390090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.390115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.390283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.390308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.390503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.390546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.390826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.390851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.391011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.391036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 
00:34:22.903 [2024-07-13 20:22:10.391232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.391257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.391427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.391455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.391711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.903 [2024-07-13 20:22:10.391735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.903 qpair failed and we were unable to recover it. 00:34:22.903 [2024-07-13 20:22:10.391928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.391954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.392120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.392144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.392341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.392369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.392554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.392582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.392794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.392819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.392993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.393018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.393186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.393211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 
00:34:22.904 [2024-07-13 20:22:10.393345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.393370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.393538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.393563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.393759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.393784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.393948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.393973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.394114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.394139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.394314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.394340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.394500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.394528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.394759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.394784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.394934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.394960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.395132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.395157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 
00:34:22.904 [2024-07-13 20:22:10.395320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.395344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.395638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.395663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.395857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.395888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.396040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.396065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.396230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.396254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.396494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.396518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.396683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.396708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.396881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.396906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.397055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.397080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.397265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.397293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 
00:34:22.904 [2024-07-13 20:22:10.397474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.397501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.397713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.397738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.397939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.397965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.398115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.398140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.398303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.398329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.398465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.398489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.398685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.398709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.398882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.398909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.399072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.399097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.399268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.399293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 
00:34:22.904 [2024-07-13 20:22:10.399438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.399463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.399633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.399675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.399816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.399842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.399985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.904 [2024-07-13 20:22:10.400012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.904 qpair failed and we were unable to recover it. 00:34:22.904 [2024-07-13 20:22:10.400178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.400203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 00:34:22.905 [2024-07-13 20:22:10.400366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.400390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 00:34:22.905 [2024-07-13 20:22:10.400521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.400546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 00:34:22.905 [2024-07-13 20:22:10.400709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.400734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 00:34:22.905 [2024-07-13 20:22:10.400879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.400905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 00:34:22.905 [2024-07-13 20:22:10.401053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.401079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 
00:34:22.905 [2024-07-13 20:22:10.401223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.401252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 00:34:22.905 [2024-07-13 20:22:10.401413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.401438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 00:34:22.905 [2024-07-13 20:22:10.401588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.401613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 00:34:22.905 [2024-07-13 20:22:10.401808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.401833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 00:34:22.905 [2024-07-13 20:22:10.401999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.402026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 00:34:22.905 [2024-07-13 20:22:10.402158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.402183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 00:34:22.905 [2024-07-13 20:22:10.402376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.402401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 00:34:22.905 [2024-07-13 20:22:10.402538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.402563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 00:34:22.905 [2024-07-13 20:22:10.402748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.402777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 00:34:22.905 [2024-07-13 20:22:10.402937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.905 [2024-07-13 20:22:10.402964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.905 qpair failed and we were unable to recover it. 
00:34:22.905 [2024-07-13 20:22:10.403130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:22.905 [2024-07-13 20:22:10.403155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:22.905 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triple repeats verbatim, differing only in timestamps, from 20:22:10.403130 through 20:22:10.445921 (errno = 111, tqpair=0x165d570, addr=10.0.0.2, port=4420 throughout) ...]
00:34:22.910 [2024-07-13 20:22:10.446119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.910 [2024-07-13 20:22:10.446144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.910 qpair failed and we were unable to recover it. 00:34:22.910 [2024-07-13 20:22:10.446332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.910 [2024-07-13 20:22:10.446357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.910 qpair failed and we were unable to recover it. 00:34:22.910 [2024-07-13 20:22:10.446546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.910 [2024-07-13 20:22:10.446571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.910 qpair failed and we were unable to recover it. 00:34:22.910 [2024-07-13 20:22:10.446721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.910 [2024-07-13 20:22:10.446746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.910 qpair failed and we were unable to recover it. 00:34:22.910 [2024-07-13 20:22:10.446905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.910 [2024-07-13 20:22:10.446931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.910 qpair failed and we were unable to recover it. 00:34:22.910 [2024-07-13 20:22:10.447070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.910 [2024-07-13 20:22:10.447095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.910 qpair failed and we were unable to recover it. 00:34:22.910 [2024-07-13 20:22:10.447292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.910 [2024-07-13 20:22:10.447317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.910 qpair failed and we were unable to recover it. 00:34:22.910 [2024-07-13 20:22:10.447486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.910 [2024-07-13 20:22:10.447510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.910 qpair failed and we were unable to recover it. 00:34:22.910 [2024-07-13 20:22:10.447724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.910 [2024-07-13 20:22:10.447752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.910 qpair failed and we were unable to recover it. 00:34:22.910 [2024-07-13 20:22:10.447951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.910 [2024-07-13 20:22:10.447977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.910 qpair failed and we were unable to recover it. 
00:34:22.910 [2024-07-13 20:22:10.448143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.910 [2024-07-13 20:22:10.448168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.910 qpair failed and we were unable to recover it. 00:34:22.910 [2024-07-13 20:22:10.448359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.910 [2024-07-13 20:22:10.448383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.910 qpair failed and we were unable to recover it. 00:34:22.910 [2024-07-13 20:22:10.448548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.448573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.448720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.448744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.448875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.448901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.449070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.449095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.449240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.449265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.449428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.449453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.449594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.449619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.449782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.449806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 
00:34:22.911 [2024-07-13 20:22:10.449949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.449975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.450120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.450145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.450285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.450309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.450473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.450498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.450666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.450695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.450839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.450863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.451072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.451097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.451282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.451307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.451466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.451492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.451685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.451709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 
00:34:22.911 [2024-07-13 20:22:10.451843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.451873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.452046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.452071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.452208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.452233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.452427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.452451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.452616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.452641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.452803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.452828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.452973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.452998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.453199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.453224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.453418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.453442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.453614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.453639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 
00:34:22.911 [2024-07-13 20:22:10.453801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.453825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.454004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.454029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.454226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.454251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.454389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.454413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.454582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.454606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.454740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.454765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.454955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.454982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.455149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.911 [2024-07-13 20:22:10.455174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.911 qpair failed and we were unable to recover it. 00:34:22.911 [2024-07-13 20:22:10.455367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.455392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.455555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.455580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 
00:34:22.912 [2024-07-13 20:22:10.455746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.455770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.455931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.455957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.456107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.456132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.456326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.456351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.456515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.456540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.456708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.456732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.456894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.456920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.457112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.457137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.457305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.457330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.457472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.457496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 
00:34:22.912 [2024-07-13 20:22:10.457628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.457652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.457792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.457816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.457950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.457975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.458115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.458140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.458284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.458309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.458489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.458514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.458659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.458685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.458862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.458903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.459068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.459093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.459260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.459284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 
00:34:22.912 [2024-07-13 20:22:10.459443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.459468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.459660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.459684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.459855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.459888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.460068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.460093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.460265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.460290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.460460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.460485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.460674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.460699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.460873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.460898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.461063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.461088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.461255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.461280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 
00:34:22.912 [2024-07-13 20:22:10.461446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.912 [2024-07-13 20:22:10.461471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.912 qpair failed and we were unable to recover it. 00:34:22.912 [2024-07-13 20:22:10.461631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.461655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.461788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.461813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.461982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.462007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.462178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.462203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.462365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.462389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.462549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.462575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.462768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.462793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.462985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.463010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.463159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.463184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 
00:34:22.913 [2024-07-13 20:22:10.463351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.463375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.463552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.463577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.463747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.463776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.463925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.463951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.464095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.464121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.464293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.464318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.464508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.464533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.464701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.464726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.464916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.464942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.465080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.465105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 
00:34:22.913 [2024-07-13 20:22:10.465255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.465280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.465415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.465440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.465613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.465638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.465803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.465828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.466006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.466031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.466182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.466207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.466352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.466377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.466565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.466590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.466718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.466743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.466885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.466911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 
00:34:22.913 [2024-07-13 20:22:10.467051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.467076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.467240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.467265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.467399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.467425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.467568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.467593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.467758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.467783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.467955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.467980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.468124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.468149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.468329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.468354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.468520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.468545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.468705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.468734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 
00:34:22.913 [2024-07-13 20:22:10.468902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.468928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.469092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.913 [2024-07-13 20:22:10.469117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.913 qpair failed and we were unable to recover it. 00:34:22.913 [2024-07-13 20:22:10.469260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.469284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.469456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.469482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.469675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.469700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.469851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.469882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.470019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.470045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.470186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.470212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.470376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.470401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.470563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.470589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 
00:34:22.914 [2024-07-13 20:22:10.470759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.470784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.470951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.470977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.471123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.471148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.471303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.471329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.471489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.471514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.471675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.471700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.471869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.471895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.472045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.472069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.472235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.472260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 00:34:22.914 [2024-07-13 20:22:10.472425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.914 [2024-07-13 20:22:10.472450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.914 qpair failed and we were unable to recover it. 
00:34:22.919 [2024-07-13 20:22:10.510642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.919 [2024-07-13 20:22:10.510667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.919 qpair failed and we were unable to recover it. 00:34:22.919 [2024-07-13 20:22:10.510856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.919 [2024-07-13 20:22:10.510886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.919 qpair failed and we were unable to recover it. 00:34:22.919 [2024-07-13 20:22:10.511028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.919 [2024-07-13 20:22:10.511054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.919 qpair failed and we were unable to recover it. 00:34:22.919 [2024-07-13 20:22:10.511245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.919 [2024-07-13 20:22:10.511272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.919 qpair failed and we were unable to recover it. 00:34:22.919 [2024-07-13 20:22:10.511467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:22.919 [2024-07-13 20:22:10.511492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:22.919 qpair failed and we were unable to recover it. 00:34:23.201 [2024-07-13 20:22:10.511684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.201 [2024-07-13 20:22:10.511709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.201 qpair failed and we were unable to recover it. 00:34:23.201 [2024-07-13 20:22:10.511848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.201 [2024-07-13 20:22:10.511889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.201 qpair failed and we were unable to recover it. 00:34:23.201 [2024-07-13 20:22:10.512062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.201 [2024-07-13 20:22:10.512087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.201 qpair failed and we were unable to recover it. 00:34:23.201 [2024-07-13 20:22:10.512218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.201 [2024-07-13 20:22:10.512243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.201 qpair failed and we were unable to recover it. 00:34:23.201 [2024-07-13 20:22:10.512409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.201 [2024-07-13 20:22:10.512434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.201 qpair failed and we were unable to recover it. 
00:34:23.201 [2024-07-13 20:22:10.512574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.512615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.512805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.512835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.513027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.513063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.513241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.513266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.513433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.513469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.513645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.513670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.513877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.513904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.514051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.514077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.514281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.514307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.514456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.514482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 
00:34:23.202 [2024-07-13 20:22:10.514646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.514671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.514835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.514860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.515043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.515069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.515238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.515268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.515440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.515467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.515644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.515677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.515842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.515873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.516070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.516097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.516289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.516314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.516512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.516538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 
00:34:23.202 [2024-07-13 20:22:10.516681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.516706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.516844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.516886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.517088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.517113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.517276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.517301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.517468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.517493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.517658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.517683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.517851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.517884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.518057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.518082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.518246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.518270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.518463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.518488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 
00:34:23.202 [2024-07-13 20:22:10.518657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.518682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.518848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.518879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.519044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.519069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.202 [2024-07-13 20:22:10.519239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.202 [2024-07-13 20:22:10.519264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.202 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.519429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.519453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.519595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.519619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.519783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.519808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.519996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.520022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.520155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.520180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.520349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.520374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 
00:34:23.203 [2024-07-13 20:22:10.520543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.520568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.520754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.520782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.520974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.520999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.521163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.521188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.521352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.521377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.521567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.521592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.521751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.521775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.521910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.521936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.522105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.522130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.522275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.522299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 
00:34:23.203 [2024-07-13 20:22:10.522464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.522489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.522627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.522651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.522825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.522851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.523025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.523050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.523259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.523284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.523425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.523449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.523618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.523643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.523811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.523836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.524044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.524070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.524213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.524237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 
00:34:23.203 [2024-07-13 20:22:10.524386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.524413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.524584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.524609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.524773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.524798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.524946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.524971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.525137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.525162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.525328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.525353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.525491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.525516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.525713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.203 [2024-07-13 20:22:10.525738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.203 qpair failed and we were unable to recover it. 00:34:23.203 [2024-07-13 20:22:10.525889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.525915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.526103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.526127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 
00:34:23.204 [2024-07-13 20:22:10.526298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.526323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.526454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.526478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.526681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.526705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.526872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.526897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.527039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.527064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.527259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.527284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.527472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.527497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.527658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.527682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.527863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.527910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.528078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.528103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 
00:34:23.204 [2024-07-13 20:22:10.528279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.528304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.528495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.528523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.528714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.528739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.528902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.528927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.529098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.529123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.529262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.529289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.529460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.529485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.529675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.529699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.529872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.529898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.530093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.530118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 
00:34:23.204 [2024-07-13 20:22:10.530290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.530315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.530464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.530490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.530662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.530687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.530853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.530884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.531024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.531049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.531191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.531216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.531385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.531410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.531580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.531606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.531742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.531767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.531964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.531990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 
00:34:23.204 [2024-07-13 20:22:10.532156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.532181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.532351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.532376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.532537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.532561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.532731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.532756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.532938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.532964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.533156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.533180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.533348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.204 [2024-07-13 20:22:10.533373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.204 qpair failed and we were unable to recover it. 00:34:23.204 [2024-07-13 20:22:10.533511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.533545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.533710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.533739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.533927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.533952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 
00:34:23.205 [2024-07-13 20:22:10.534126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.534151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.534319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.534343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.534509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.534534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.534717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.534746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.534900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.534943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.535136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.535161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.535334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.535358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.535529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.535553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.535721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.535746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.535888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.535913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 
00:34:23.205 [2024-07-13 20:22:10.536087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.536113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.536306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.536331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.536534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.536558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.536730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.536755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.536895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.536921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.537095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.537120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.537289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.537314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.537452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.537477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.537639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.537664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.537834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.537860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 
00:34:23.205 [2024-07-13 20:22:10.538392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.538423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.538632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.538658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.538831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.538856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.539034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.539060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.539231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.539257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.539439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.539464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.539650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.539675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.539847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.539886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.540082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.540108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.540268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.540293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 
00:34:23.205 [2024-07-13 20:22:10.540460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.540485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.540655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.540680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.540849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.540881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.541051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.541076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.541278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.205 [2024-07-13 20:22:10.541303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.205 qpair failed and we were unable to recover it. 00:34:23.205 [2024-07-13 20:22:10.541463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.541488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.541636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.541661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.541860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.541891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.542058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.542082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.542277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.542313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 
00:34:23.206 [2024-07-13 20:22:10.542520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.542551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.542768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.542811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.542975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.543002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.543153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.543179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.543400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.543443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.543641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.543669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.543856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.543889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.544035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.544061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.544261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.544304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.544500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.544543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 
00:34:23.206 [2024-07-13 20:22:10.544763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.544805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.544965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.544991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.545161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.545210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.545437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.545480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.545703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.545731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.545936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.545982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.546152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.546180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.546350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.546393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.546562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.546588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.546761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.546787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 
00:34:23.206 [2024-07-13 20:22:10.547007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.547052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.547258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.547301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.547460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.547502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.547710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.206 [2024-07-13 20:22:10.547736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.206 qpair failed and we were unable to recover it. 00:34:23.206 [2024-07-13 20:22:10.547920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.547950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.548195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.548238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.548455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.548499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.548634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.548660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.548827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.548853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.549040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.549066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 
00:34:23.207 [2024-07-13 20:22:10.549300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.549343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.549537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.549564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.549715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.549740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.549928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.549957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.550129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.550157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.550362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.550390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.550628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.550671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.550835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.550860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.551038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.551082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.551268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.551309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 
00:34:23.207 [2024-07-13 20:22:10.551491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.551536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.551676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.551701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.551893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.551920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.552076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.552120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.552355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.552397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.552589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.552617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.552799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.552824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.552989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.553015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.553184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.553227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.553390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.553432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 
00:34:23.207 [2024-07-13 20:22:10.553616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.207 [2024-07-13 20:22:10.553643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:23.207 qpair failed and we were unable to recover it.
00:34:23.207 [2024-07-13 20:22:10.553818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.207 [2024-07-13 20:22:10.553842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:23.207 qpair failed and we were unable to recover it.
00:34:23.207 [2024-07-13 20:22:10.553894] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166b0f0 (9): Bad file descriptor
00:34:23.207 [2024-07-13 20:22:10.554114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.207 [2024-07-13 20:22:10.554152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.207 qpair failed and we were unable to recover it.
00:34:23.207 [2024-07-13 20:22:10.554413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.207 [2024-07-13 20:22:10.554442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.207 qpair failed and we were unable to recover it.
00:34:23.207 [2024-07-13 20:22:10.554690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.207 [2024-07-13 20:22:10.554739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.207 qpair failed and we were unable to recover it.
00:34:23.207 [2024-07-13 20:22:10.554940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.207 [2024-07-13 20:22:10.554968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.207 qpair failed and we were unable to recover it.
00:34:23.207 [2024-07-13 20:22:10.555158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.207 [2024-07-13 20:22:10.555187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.207 qpair failed and we were unable to recover it.
00:34:23.207 [2024-07-13 20:22:10.555406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.207 [2024-07-13 20:22:10.555434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.207 qpair failed and we were unable to recover it.
00:34:23.207 [2024-07-13 20:22:10.555615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.207 [2024-07-13 20:22:10.555644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.207 qpair failed and we were unable to recover it.
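For reference, the two errno values appearing in this stretch of the log map to standard Linux error names: errno = 111 is ECONNREFUSED, meaning nothing was accepting TCP connections at 10.0.0.2:4420 when connect() was issued, and the single nvme_tcp_qpair_process_completions entry above, "Failed to flush tqpair=0x166b0f0 (9): Bad file descriptor", is errno = 9 (EBADF), meaning the flush ran against a qpair socket descriptor that had already been closed. A minimal standalone sketch (plain C, not SPDK code) that prints these mappings:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* errno values copied from the log entries above; the message
     * strings assume a Linux/glibc strerror() table. */
    int observed[] = { 111, 9 };
    for (unsigned i = 0; i < sizeof(observed) / sizeof(observed[0]); i++) {
        printf("errno = %d -> %s\n", observed[i], strerror(observed[i]));
    }
    return 0;
}

On glibc this prints "Connection refused" and "Bad file descriptor", matching the failures recorded here.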
00:34:23.207 [2024-07-13 20:22:10.555850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.555886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.556071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.207 [2024-07-13 20:22:10.556096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.207 qpair failed and we were unable to recover it. 00:34:23.207 [2024-07-13 20:22:10.556263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.556291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.556516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.556545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.556715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.556760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.556980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.557005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.557164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.557198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.557388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.557418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.557628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.557656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.557820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.557848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 
00:34:23.208 [2024-07-13 20:22:10.558043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.558068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.558223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.558249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.558441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.558469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.558671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.558720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.558901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.558927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.559089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.559114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.559283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.559311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.559564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.559613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.559787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.559812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.559970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.559995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 
00:34:23.208 [2024-07-13 20:22:10.560165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.560207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.560425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.560470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.560655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.560682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.560832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.560859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.561036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.561076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.561243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.561273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.561552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.561598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.561788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.561819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.562029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.562057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.562208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.562233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 
00:34:23.208 [2024-07-13 20:22:10.562492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.562539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.562716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.562745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.562937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.562964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.563110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.563136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.563335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.563364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.563667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.563719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.563887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.563941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.564090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.564117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.208 [2024-07-13 20:22:10.564292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.208 [2024-07-13 20:22:10.564318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.208 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.564558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.564608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 
00:34:23.209 [2024-07-13 20:22:10.564788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.564817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.564998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.565025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.565171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.565198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.565422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.565450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.565689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.565715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.565890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.565918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.566089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.566120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.566312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.566342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.566541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.566590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.566775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.566804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 
00:34:23.209 [2024-07-13 20:22:10.566982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.567009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.567207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.567233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.567409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.567438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.567627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.567653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.567874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.567903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.568069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.568097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.568296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.568322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.568471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.568497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.568646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.568672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.568875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.568918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 
00:34:23.209 [2024-07-13 20:22:10.569113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.569154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.569338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.569367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.569527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.569553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.569734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.569763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.569924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.569954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.570172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.570198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.570420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.570445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.570612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.570638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.570780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.570805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.571000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.571029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 
00:34:23.209 [2024-07-13 20:22:10.571211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.571241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.571442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.571467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.571712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.209 [2024-07-13 20:22:10.571761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.209 qpair failed and we were unable to recover it. 00:34:23.209 [2024-07-13 20:22:10.571951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.210 [2024-07-13 20:22:10.571980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.210 qpair failed and we were unable to recover it. 00:34:23.210 [2024-07-13 20:22:10.572169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.210 [2024-07-13 20:22:10.572195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.210 qpair failed and we were unable to recover it. 00:34:23.210 [2024-07-13 20:22:10.572382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.210 [2024-07-13 20:22:10.572410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.210 qpair failed and we were unable to recover it. 00:34:23.210 [2024-07-13 20:22:10.572572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.210 [2024-07-13 20:22:10.572600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.210 qpair failed and we were unable to recover it. 00:34:23.210 [2024-07-13 20:22:10.572793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.210 [2024-07-13 20:22:10.572819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.210 qpair failed and we were unable to recover it. 00:34:23.210 [2024-07-13 20:22:10.573033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.210 [2024-07-13 20:22:10.573076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.210 qpair failed and we were unable to recover it. 00:34:23.210 [2024-07-13 20:22:10.573245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.210 [2024-07-13 20:22:10.573276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.210 qpair failed and we were unable to recover it. 
00:34:23.210 [2024-07-13 20:22:10.573461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.210 [2024-07-13 20:22:10.573486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.210 qpair failed and we were unable to recover it.
[... the three-line error above repeats over 200 times between 20:22:10.573 and 20:22:10.621 (Jenkins timestamps 00:34:23.210-00:34:23.215): every connect() attempt fails with errno = 111, the qpair connect error cycles through tqpair addresses 0x7f4934000b90, 0x7f492c000b90, 0x7f4924000b90, and 0x165d570, always targeting addr=10.0.0.2, port=4420, and each qpair fails without recovering ...]
00:34:23.215 [2024-07-13 20:22:10.621638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.215 [2024-07-13 20:22:10.621669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.215 qpair failed and we were unable to recover it. 00:34:23.215 [2024-07-13 20:22:10.621839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.215 [2024-07-13 20:22:10.621871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.215 qpair failed and we were unable to recover it. 00:34:23.215 [2024-07-13 20:22:10.622033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.215 [2024-07-13 20:22:10.622079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.215 qpair failed and we were unable to recover it. 00:34:23.215 [2024-07-13 20:22:10.622298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.215 [2024-07-13 20:22:10.622341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.215 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.622538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.622582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.622753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.622780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.622973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.623019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.623212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.623257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.623461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.623509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.623682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.623711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 
00:34:23.216 [2024-07-13 20:22:10.623888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.623942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.624144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.624187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.624356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.624400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.624571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.624599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.624796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.624824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.625040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.625087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.625242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.625284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.625453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.625498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.625669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.625699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.625870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.625897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 
00:34:23.216 [2024-07-13 20:22:10.626120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.626149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.626386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.626429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.626600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.626643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.626841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.626881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.627078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.627129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.627334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.627395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.627597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.627639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.627817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.627853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.628037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.628083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.628268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.628311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 
00:34:23.216 [2024-07-13 20:22:10.628533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.628580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.628773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.628799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.629016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.629059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.629251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.629296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.629495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.629523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.629692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.629719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.629918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.629966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.630177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.630220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.630419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.630463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.630630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.630656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 
00:34:23.216 [2024-07-13 20:22:10.630799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.630825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.631025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.216 [2024-07-13 20:22:10.631071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.216 qpair failed and we were unable to recover it. 00:34:23.216 [2024-07-13 20:22:10.631299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.631343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.631542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.631589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.631761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.631788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.631981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.632025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.632218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.632249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.632461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.632503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.632698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.632727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.632922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.632953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 
00:34:23.217 [2024-07-13 20:22:10.633249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.633278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.633442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.633470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.633631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.633659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.633872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.633897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.634086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.634110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.634295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.634322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.634539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.634583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.634746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.634773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.634999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.635025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.635195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.635219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 
00:34:23.217 [2024-07-13 20:22:10.635409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.635436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.635638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.635666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.635858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.635890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.636062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.636087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.636262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.636287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.636502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.636529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.636734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.636781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.637000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.637025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.637194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.637219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.637409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.637437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 
00:34:23.217 [2024-07-13 20:22:10.637614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.637642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.637926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.637952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.638090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.638132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.638344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.638371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.638597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.638641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.638851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.638885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.639071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.639096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.639312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.639371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.639582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.639628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.639802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.639827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 
00:34:23.217 [2024-07-13 20:22:10.640010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.640038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.640231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.640275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.640469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.217 [2024-07-13 20:22:10.640512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.217 qpair failed and we were unable to recover it. 00:34:23.217 [2024-07-13 20:22:10.640734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.640779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.640944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.640971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.641167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.641211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.641411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.641458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.641673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.641715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.641855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.641886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.642054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.642082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 
00:34:23.218 [2024-07-13 20:22:10.642301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.642349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.642565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.642608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.642797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.642823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.643034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.643062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.643250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.643293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.643510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.643536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.643704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.643729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.643901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.643929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.644128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.644169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.644364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.644407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 
00:34:23.218 [2024-07-13 20:22:10.644579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.644606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.644802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.644828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.645021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.645066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.645256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.645299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.645504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.645550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.645746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.645772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.645969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.646015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.646178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.646221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.646450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.646492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.646664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.646690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 
00:34:23.218 [2024-07-13 20:22:10.646837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.646863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.647029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.647083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.647292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.647336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.647552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.647594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.647786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.647814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.648013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.648056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.648276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.648319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.648516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.648560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.648733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.648760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.648979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.649023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 
00:34:23.218 [2024-07-13 20:22:10.649210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.649253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.649422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.649466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.649661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.218 [2024-07-13 20:22:10.649690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.218 qpair failed and we were unable to recover it. 00:34:23.218 [2024-07-13 20:22:10.649889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.649920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 00:34:23.219 [2024-07-13 20:22:10.650147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.650189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 00:34:23.219 [2024-07-13 20:22:10.650355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.650402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 00:34:23.219 [2024-07-13 20:22:10.650624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.650667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 00:34:23.219 [2024-07-13 20:22:10.650870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.650897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 00:34:23.219 [2024-07-13 20:22:10.651113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.651142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 00:34:23.219 [2024-07-13 20:22:10.651389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.651433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 
00:34:23.219 [2024-07-13 20:22:10.651603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.651645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 00:34:23.219 [2024-07-13 20:22:10.651821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.651847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 00:34:23.219 [2024-07-13 20:22:10.652063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.652105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 00:34:23.219 [2024-07-13 20:22:10.652305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.652335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 00:34:23.219 [2024-07-13 20:22:10.652519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.652548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 00:34:23.219 [2024-07-13 20:22:10.652722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.652768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 00:34:23.219 [2024-07-13 20:22:10.652971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.652998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 00:34:23.219 [2024-07-13 20:22:10.653186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.653216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 00:34:23.219 [2024-07-13 20:22:10.653447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.653475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 00:34:23.219 [2024-07-13 20:22:10.653666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.219 [2024-07-13 20:22:10.653694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.219 qpair failed and we were unable to recover it. 
00:34:23.224 [2024-07-13 20:22:10.696639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.696667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 00:34:23.224 [2024-07-13 20:22:10.696848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.696881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 00:34:23.224 [2024-07-13 20:22:10.697097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.697122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 00:34:23.224 [2024-07-13 20:22:10.697314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.697342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 00:34:23.224 [2024-07-13 20:22:10.697542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.697570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 00:34:23.224 [2024-07-13 20:22:10.697755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.697780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 00:34:23.224 [2024-07-13 20:22:10.697968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.697996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 00:34:23.224 [2024-07-13 20:22:10.698175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.698202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 00:34:23.224 [2024-07-13 20:22:10.698389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.698413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 00:34:23.224 [2024-07-13 20:22:10.698602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.698631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 
00:34:23.224 [2024-07-13 20:22:10.698813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.698841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 00:34:23.224 [2024-07-13 20:22:10.699059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.699085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 00:34:23.224 [2024-07-13 20:22:10.699283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.699331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 00:34:23.224 [2024-07-13 20:22:10.699552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.699577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 00:34:23.224 [2024-07-13 20:22:10.699831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.699859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 00:34:23.224 [2024-07-13 20:22:10.700088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.700113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.224 qpair failed and we were unable to recover it. 00:34:23.224 [2024-07-13 20:22:10.700283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.224 [2024-07-13 20:22:10.700310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.700497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.700526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.700742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.700770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.700952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.700981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 
00:34:23.225 [2024-07-13 20:22:10.701141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.701166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.701309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.701334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.701473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.701498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.701686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.701712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.701903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.701931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.702107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.702135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.702348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.702373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.702531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.702559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.702726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.702753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.702943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.702969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 
00:34:23.225 [2024-07-13 20:22:10.703126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.703153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.703336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.703365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.703553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.703578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.703740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.703767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.704024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.704053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.704275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.704300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.704533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.704582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.704766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.704794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.704979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.705005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.705189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.705216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 
00:34:23.225 [2024-07-13 20:22:10.705407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.705433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.705594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.705619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.705779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.705807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.705990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.706018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.706188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.706213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.706415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.706443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.706669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.706694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.706863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.706893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.707083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.707112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.707334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.707359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 
00:34:23.225 [2024-07-13 20:22:10.707529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.707555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.707711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.707738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.707933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.707959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.708132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.708157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.708409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.708457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.708610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.225 [2024-07-13 20:22:10.708639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.225 qpair failed and we were unable to recover it. 00:34:23.225 [2024-07-13 20:22:10.708800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.708825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.709006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.709036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.709230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.709256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.709426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.709451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 
00:34:23.226 [2024-07-13 20:22:10.709704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.709754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.709968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.709997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.710163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.710188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.710353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.710379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.710557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.710586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.710755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.710781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.710942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.710968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.711146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.711174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.711330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.711355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.711499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.711542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 
00:34:23.226 [2024-07-13 20:22:10.711695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.711722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.711921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.711947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.712118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.712143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.712331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.712359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.712545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.712570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.712705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.712731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.712876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.712918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.713114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.713139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.713303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.713331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.713480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.713508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 
00:34:23.226 [2024-07-13 20:22:10.713699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.713724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.713954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.713983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.714170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.714199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.714415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.714440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.714634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.714663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.714876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.714919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.715085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.715112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.715319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.715347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.715562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.715590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.715754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.715779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 
00:34:23.226 [2024-07-13 20:22:10.715943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.715969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.716132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.716160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.716321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.716347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.716489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.716530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.716738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.716766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.716928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.716955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.226 [2024-07-13 20:22:10.717128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.226 [2024-07-13 20:22:10.717153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.226 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.717373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.717406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.717620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.717644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.717792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.717817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 
00:34:23.227 [2024-07-13 20:22:10.718011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.718037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.718209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.718234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.718459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.718487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.718693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.718721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.718914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.718941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.719136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.719161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.719359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.719386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.719543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.719568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.719738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.719764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.719903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.719928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 
00:34:23.227 [2024-07-13 20:22:10.720075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.720101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.720322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.720350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.720509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.720537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.720709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.720738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.720935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.720962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.721128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.721169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.721358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.721383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.721543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.721571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.721791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.721816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.721963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.721990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 
00:34:23.227 [2024-07-13 20:22:10.722184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.722212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.722394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.722421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.722615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.722640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.722832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.722856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.723074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.723100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.723269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.723294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.723446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.723474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.723629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.723657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.723840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.723869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 00:34:23.227 [2024-07-13 20:22:10.724067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.227 [2024-07-13 20:22:10.724095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.227 qpair failed and we were unable to recover it. 
00:34:23.227 [2024-07-13 20:22:10.724253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.227 [2024-07-13 20:22:10.724282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.227 qpair failed and we were unable to recover it.
[... identical connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence repeated for tqpair=0x7f4934000b90 through 2024-07-13 20:22:10.740070 ...]
00:34:23.230 [2024-07-13 20:22:10.740286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.230 [2024-07-13 20:22:10.740330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420
00:34:23.230 qpair failed and we were unable to recover it.
[... same sequence repeated for tqpair=0x7f4924000b90 through 2024-07-13 20:22:10.754602 ...]
00:34:23.231 [2024-07-13 20:22:10.754779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.232 [2024-07-13 20:22:10.754809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.232 qpair failed and we were unable to recover it.
[... same sequence repeated for tqpair=0x7f4934000b90 through 2024-07-13 20:22:10.769607 ...]
00:34:23.233 [2024-07-13 20:22:10.769753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.233 [2024-07-13 20:22:10.769778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.233 qpair failed and we were unable to recover it.
00:34:23.233 [2024-07-13 20:22:10.769972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.769998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.770210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.770235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.770445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.770473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.770659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.770687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.770854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.770886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.771053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.771078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.771292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.771319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.771508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.771533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.771678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.771704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.771933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.771969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 
00:34:23.234 [2024-07-13 20:22:10.772145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.772170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.772315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.772340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.772501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.772526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.772720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.772745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.772972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.772998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.773218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.773246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.773433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.773459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.773695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.773746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.773938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.773966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.774132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.774157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 
00:34:23.234 [2024-07-13 20:22:10.774346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.774374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.774563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.774591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.774798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.774823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.775004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.775034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.775213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.775245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.775428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.775453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.775629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.775654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.775877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.775906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.776093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.776119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.776264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.776289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 
00:34:23.234 [2024-07-13 20:22:10.776502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.776530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.776709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.776735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.776958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.776987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.777199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.777227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.777422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.777447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.777614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.777639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.777830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.777855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.778029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.778054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.778247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.778275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.778433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.778461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 
00:34:23.234 [2024-07-13 20:22:10.778625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.778650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.234 qpair failed and we were unable to recover it. 00:34:23.234 [2024-07-13 20:22:10.778827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.234 [2024-07-13 20:22:10.778855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.779025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.779051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.779216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.779241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.779443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.779494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.779705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.779733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.779936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.779962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.780128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.780153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.780342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.780370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.780537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.780562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 
00:34:23.235 [2024-07-13 20:22:10.780703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.780728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.780915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.780944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.781158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.781183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.781389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.781417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.781566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.781593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.781778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.781802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.781942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.781968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.782175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.782203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.782391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.782417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.782601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.782628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 
00:34:23.235 [2024-07-13 20:22:10.782890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.782918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.783103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.783128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.783341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.783370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.783541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.783569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.783767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.783797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.783988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.784017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.784232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.784257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.784450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.784476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.784637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.784662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.784845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.784882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 
00:34:23.235 [2024-07-13 20:22:10.785137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.785162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.785381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.785409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.785669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.785696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.785931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.785957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.786105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.786130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.786336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.786363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.786526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.786552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.786717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.786761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.786955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.786984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.787165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.787190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 
00:34:23.235 [2024-07-13 20:22:10.787334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.787359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.787620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.787648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.787856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.787886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.788083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.235 [2024-07-13 20:22:10.788111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.235 qpair failed and we were unable to recover it. 00:34:23.235 [2024-07-13 20:22:10.788272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.788300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.788490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.788515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.788701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.788729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.788911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.788939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.789130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.789155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.789333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.789360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 
00:34:23.236 [2024-07-13 20:22:10.789545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.789572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.789742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.789767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.789947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.789976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.790128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.790156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.790333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.790358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.790508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.790533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.790666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.790691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.790859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.790890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.791101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.791129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.791342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.791367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 
00:34:23.236 [2024-07-13 20:22:10.791533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.791558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.791721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.791749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.792003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.792032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.792224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.792249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.792406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.792438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.792628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.792656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.792913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.792954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.793088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.793113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.793309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.793337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.793521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.793546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 
00:34:23.236 [2024-07-13 20:22:10.793736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.793764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.794019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.794048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.794263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.794288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.794484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.794513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.794723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.794750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.794923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.794948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.795083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.795110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.795250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.795277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.795481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.795506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.795696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.795724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 
00:34:23.236 [2024-07-13 20:22:10.795883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.795912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.796076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.796101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.796279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.796307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.796487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.796514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.796666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.796691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.236 qpair failed and we were unable to recover it. 00:34:23.236 [2024-07-13 20:22:10.796829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.236 [2024-07-13 20:22:10.796874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.237 qpair failed and we were unable to recover it. 00:34:23.237 [2024-07-13 20:22:10.797057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.237 [2024-07-13 20:22:10.797086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.237 qpair failed and we were unable to recover it. 00:34:23.237 [2024-07-13 20:22:10.797303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.237 [2024-07-13 20:22:10.797328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.237 qpair failed and we were unable to recover it. 00:34:23.237 [2024-07-13 20:22:10.797542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.237 [2024-07-13 20:22:10.797570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.237 qpair failed and we were unable to recover it. 00:34:23.237 [2024-07-13 20:22:10.797736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.237 [2024-07-13 20:22:10.797760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.237 qpair failed and we were unable to recover it. 
00:34:23.237 [2024-07-13 20:22:10.797955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.237 [2024-07-13 20:22:10.797981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.237 qpair failed and we were unable to recover it.
[... the identical three-record failure sequence (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats ≈210 times in total between 20:22:10.797955 and 20:22:10.844693; first and last occurrences shown ...]
00:34:23.521 [2024-07-13 20:22:10.844664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.521 [2024-07-13 20:22:10.844693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.521 qpair failed and we were unable to recover it.
00:34:23.521 [2024-07-13 20:22:10.844861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.844896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 00:34:23.521 [2024-07-13 20:22:10.845046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.845072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 00:34:23.521 [2024-07-13 20:22:10.845256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.845298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 00:34:23.521 [2024-07-13 20:22:10.845570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.845606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 00:34:23.521 [2024-07-13 20:22:10.845802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.845843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 00:34:23.521 [2024-07-13 20:22:10.846049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.846087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 00:34:23.521 [2024-07-13 20:22:10.846245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.846273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 00:34:23.521 [2024-07-13 20:22:10.846444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.846470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 00:34:23.521 [2024-07-13 20:22:10.846620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.846646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 00:34:23.521 [2024-07-13 20:22:10.846813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.846839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 
00:34:23.521 [2024-07-13 20:22:10.847025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.847052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 00:34:23.521 [2024-07-13 20:22:10.847248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.847275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 00:34:23.521 [2024-07-13 20:22:10.847444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.847470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 00:34:23.521 [2024-07-13 20:22:10.847639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.847676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 00:34:23.521 [2024-07-13 20:22:10.847844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.847875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 00:34:23.521 [2024-07-13 20:22:10.848019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.848044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.521 qpair failed and we were unable to recover it. 00:34:23.521 [2024-07-13 20:22:10.848209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.521 [2024-07-13 20:22:10.848234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.848375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.848401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.848579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.848605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.848743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.848768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 
00:34:23.522 [2024-07-13 20:22:10.848941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.848968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.849111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.849138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.849321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.849346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.849544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.849570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.849755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.849783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.849983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.850010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.850182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.850208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.850406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.850435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.850597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.850623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.850824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.850850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 
00:34:23.522 [2024-07-13 20:22:10.851025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.851052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.851230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.851256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.851424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.851450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.851615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.851642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.851805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.851831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.852020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.852048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.852217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.852255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.852398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.852423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.852593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.852619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.852763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.852788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 
00:34:23.522 [2024-07-13 20:22:10.852962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.852989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.853151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.853186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.853344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.853373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.853553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.853580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.853748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.853784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.853977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.854004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.854181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.854207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.854351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.854376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.854521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.854547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.854710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.854736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 
00:34:23.522 [2024-07-13 20:22:10.854911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.854938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.855084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.855110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.855280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.855306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.855471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.855497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.855661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.855686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.855822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.855847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.855999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.522 [2024-07-13 20:22:10.856026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.522 qpair failed and we were unable to recover it. 00:34:23.522 [2024-07-13 20:22:10.856196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.856222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.856371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.856397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.856606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.856634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 
00:34:23.523 [2024-07-13 20:22:10.856837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.856891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.857084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.857110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.857294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.857319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.857456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.857481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.857614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.857639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.857808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.857832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.857990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.858015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.858180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.858205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.858370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.858395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.858538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.858573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 
00:34:23.523 [2024-07-13 20:22:10.858748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.858772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.858911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.858942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.859078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.859103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.859250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.859275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.859410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.859436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.859606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.859631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.859796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.859821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.859988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.860013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.860203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.860228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.860416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.860441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 
00:34:23.523 [2024-07-13 20:22:10.860605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.860629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.860899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.860925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.861116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.861142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.861293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.861318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.861571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.861597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.861775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.861800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.861972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.861997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.862141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.862166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.862340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.862365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.862506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.862532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 
00:34:23.523 [2024-07-13 20:22:10.862700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.862725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.862932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.862958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.863162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.863187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.863341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.863366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.863536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.863561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.863717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.863741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.863940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.863965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.864107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.523 [2024-07-13 20:22:10.864133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.523 qpair failed and we were unable to recover it. 00:34:23.523 [2024-07-13 20:22:10.864277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.864320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.864469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.864494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 
00:34:23.524 [2024-07-13 20:22:10.864676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.864700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.864870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.864904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.865100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.865125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.865288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.865313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.865484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.865509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.865673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.865698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.865841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.865872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.866701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.866733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.866951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.866981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.867172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.867198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 
00:34:23.524 [2024-07-13 20:22:10.867393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.867421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.867638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.867667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.867830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.867855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.868007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.868032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.868192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.868217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.868357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.868382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.868548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.868573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.868744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.868769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.868930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.868955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.869123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.869148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 
00:34:23.524 [2024-07-13 20:22:10.869290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.869315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.869453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.869478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.869664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.869689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.869831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.869855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.870005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.870030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.870171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.870196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.870344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.870370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.870514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.870538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.870675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.870700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 00:34:23.524 [2024-07-13 20:22:10.870837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.524 [2024-07-13 20:22:10.870880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.524 qpair failed and we were unable to recover it. 
00:34:23.524 [2024-07-13 20:22:10.871026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.524 [2024-07-13 20:22:10.871051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:23.524 qpair failed and we were unable to recover it.
[... the same three-line failure — connect() failed, errno = 111 / sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats unchanged for every reconnect attempt from 20:22:10.871240 through 20:22:10.914876 (elapsed 00:34:23.524–00:34:23.531); only the timestamps differ ...]
00:34:23.531 [2024-07-13 20:22:10.915038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.915062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.915251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.915278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.915453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.915481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.915677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.915702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.915925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.915951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.916096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.916122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.916261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.916286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.916502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.916531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.916712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.916740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.916937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.916963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 
00:34:23.531 [2024-07-13 20:22:10.917133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.917158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.917337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.917364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.917597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.917621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.917785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.917812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.917978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.918003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.918171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.918195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.918353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.918381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.918571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.918598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.918789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.918813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.918972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.918998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 
00:34:23.531 [2024-07-13 20:22:10.919172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.919215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.919392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.919419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.919565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.919591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.919724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.919750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.919943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.919969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.920139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.920164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.531 [2024-07-13 20:22:10.920337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.531 [2024-07-13 20:22:10.920362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.531 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.920519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.920555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.920782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.920807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.920976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.921001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 
00:34:23.532 [2024-07-13 20:22:10.921151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.921176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.921350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.921375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.921567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.921592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.921823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.921851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.922057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.922082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.922248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.922275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.922492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.922518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.922665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.922690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.922826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.922850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.922994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.923019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 
00:34:23.532 [2024-07-13 20:22:10.923209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.923238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.923465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.923493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.923709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.923734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.923898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.923944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.924114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.924139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.924356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.924381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.924598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.924650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.924829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.924857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.925030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.925055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.925245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.925273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 
00:34:23.532 [2024-07-13 20:22:10.925442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.925467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.925630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.925655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.925874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.925917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.926052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.926077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.926221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.926246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.926417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.926441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.926663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.926721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.926923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.926949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.927097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.927122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.927314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.927348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 
00:34:23.532 [2024-07-13 20:22:10.927537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.927562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.927747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.927775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.927968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.927994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.928158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.928183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.928376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.928405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.928593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.928621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.928813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.928838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.532 [2024-07-13 20:22:10.929016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.532 [2024-07-13 20:22:10.929041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.532 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.929204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.929229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.929376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.929401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 
00:34:23.533 [2024-07-13 20:22:10.929590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.929621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.929841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.929876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.930040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.930065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.930288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.930315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.930517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.930542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.930678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.930703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.930839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.930880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.931024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.931049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.931215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.931240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.931410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.931434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 
00:34:23.533 [2024-07-13 20:22:10.931575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.931618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.931805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.931833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.931997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.932023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.932192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.932233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.932395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.932420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.932621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.932675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.932836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.932873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.933065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.933090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.933280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.933308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.933469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.933497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 
00:34:23.533 [2024-07-13 20:22:10.933683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.933707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.933872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.933901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.934080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.934105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.934252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.934277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.934445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.934471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.934694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.934719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.934890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.934916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.935088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.935113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.935317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.935345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.935534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.935559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 
00:34:23.533 [2024-07-13 20:22:10.935705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.935730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.935878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.935913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.936056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.936081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.936274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.936302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.936457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.936485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.936698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.936728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.936879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.936904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.937074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.937099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.937264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.533 [2024-07-13 20:22:10.937289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.533 qpair failed and we were unable to recover it. 00:34:23.533 [2024-07-13 20:22:10.937491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.937515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 
00:34:23.534 [2024-07-13 20:22:10.937691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.937716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.937961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.937987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.938129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.938153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.938364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.938391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.938569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.938594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.938812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.938839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.938999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.939028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.939212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.939246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.939424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.939448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.939638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.939666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 
00:34:23.534 [2024-07-13 20:22:10.939849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.939888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.940034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.940060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.940230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.940255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.940395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.940420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.940609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.940637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.940800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.940828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.941004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.941030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.941198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.941223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.941390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.941415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 00:34:23.534 [2024-07-13 20:22:10.941551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.534 [2024-07-13 20:22:10.941576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.534 qpair failed and we were unable to recover it. 
00:34:23.534 [2024-07-13 20:22:10.941736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.534 [2024-07-13 20:22:10.941761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:23.534 qpair failed and we were unable to recover it.
00:34:23.534 last 3 messages repeated ~144 more times for tqpair=0x165d570 (20:22:10.941957 - 20:22:10.971787)
00:34:23.538 same sequence repeated ~9 times for tqpair=0x7f4924000b90 (20:22:10.971999 - 20:22:10.973660)
00:34:23.539 same sequence repeated ~56 times for tqpair=0x165d570 (20:22:10.973898 - 20:22:10.985571)
00:34:23.539 [2024-07-13 20:22:10.985719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.539 [2024-07-13 20:22:10.985744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.539 qpair failed and we were unable to recover it. 00:34:23.539 [2024-07-13 20:22:10.985883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.539 [2024-07-13 20:22:10.985913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.539 qpair failed and we were unable to recover it. 00:34:23.539 [2024-07-13 20:22:10.986053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.539 [2024-07-13 20:22:10.986078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.539 qpair failed and we were unable to recover it. 00:34:23.539 [2024-07-13 20:22:10.986266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.539 [2024-07-13 20:22:10.986296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.539 qpair failed and we were unable to recover it. 00:34:23.539 [2024-07-13 20:22:10.986479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.539 [2024-07-13 20:22:10.986525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.539 qpair failed and we were unable to recover it. 00:34:23.539 [2024-07-13 20:22:10.986763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.539 [2024-07-13 20:22:10.986790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.986976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.987007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.987220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.987248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.987426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.987452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.987615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.987642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 
00:34:23.540 [2024-07-13 20:22:10.987862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.987892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.988063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.988089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.988257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.988282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.988415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.988440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.988608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.988645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.988846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.988880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.989067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.989097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.989299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.989324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.989509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.989535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.989709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.989734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 
00:34:23.540 [2024-07-13 20:22:10.989931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.989958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.990102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.990127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.990320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.990345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.990482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.990506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.990645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.990670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.990831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.990888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.991078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.991103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.991258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.991285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.991496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.991521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.991663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.991687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 
00:34:23.540 [2024-07-13 20:22:10.991877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.991906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.992064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.992092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.992310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.992335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.992496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.992523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.992758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.992802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.992987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.993013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.993201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.993229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.993416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.993444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.993623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.993648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.993790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.993815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 
00:34:23.540 [2024-07-13 20:22:10.993989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.994015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.994148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.994171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.994337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.994366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.994528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.994552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.994714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.994739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.994951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.994980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.540 [2024-07-13 20:22:10.995166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.540 [2024-07-13 20:22:10.995194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.540 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.995377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.995402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.995614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.995641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.995794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.995821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 
00:34:23.541 [2024-07-13 20:22:10.996019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.996045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.996235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.996260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.996392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.996434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.996621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.996645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.996817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.996842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.996987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.997012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.997149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.997174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.997384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.997412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.997586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.997614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.997795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.997820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 
00:34:23.541 [2024-07-13 20:22:10.997986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.998012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.998164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.998194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.998373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.998398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.998536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.998560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.998728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.998753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.998956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.998981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.999142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.999167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.999311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.999335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.999474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.999499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:10.999643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.999671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 
00:34:23.541 [2024-07-13 20:22:10.999836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:10.999861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.000061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.000086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.000258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.000283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.000451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.000476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.000647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.000671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.000830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.000855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.001031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.001056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.001192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.001217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.001383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.001407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.001569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.001593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 
00:34:23.541 [2024-07-13 20:22:11.001728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.001753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.001947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.001973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.002143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.002168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.002335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.002361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.002507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.002532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.002689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.002714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.002884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.002910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.003088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.541 [2024-07-13 20:22:11.003113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.541 qpair failed and we were unable to recover it. 00:34:23.541 [2024-07-13 20:22:11.003282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.003307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.003441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.003466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 
00:34:23.542 [2024-07-13 20:22:11.003657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.003682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.003855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.003885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.004027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.004052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.004219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.004244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.004433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.004458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.004625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.004650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.004808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.004836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.005001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.005026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.005191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.005216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.005381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.005405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 
00:34:23.542 [2024-07-13 20:22:11.005574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.005599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.005797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.005822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.005993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.006019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.006189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.006214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.006344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.006370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.006533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.006558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.006755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.006780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.006979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.007004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.007173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.007198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.007361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.007386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 
00:34:23.542 [2024-07-13 20:22:11.007580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.007606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.007793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.007818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.007984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.008010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.008175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.008200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.008372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.008397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.008564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.008589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.008756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.008781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.008945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.008972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.009134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.009159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.009349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.009374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 
00:34:23.542 [2024-07-13 20:22:11.009536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.009561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.009724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.009748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.009913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.009939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.010103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.010128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.010330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.010355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.010524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.010549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.010718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.010742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.010937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.010964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.011131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.011156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.542 qpair failed and we were unable to recover it. 00:34:23.542 [2024-07-13 20:22:11.011325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.542 [2024-07-13 20:22:11.011350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.543 qpair failed and we were unable to recover it. 
00:34:23.543 [2024-07-13 20:22:11.011508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.543 [2024-07-13 20:22:11.011533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:23.543 qpair failed and we were unable to recover it.
00:34:23.548 [... the same three-line failure sequence repeats back-to-back with only the timestamps changing: 56 connect() attempts against tqpair=0x165d570 between 20:22:11.011 and 20:22:11.022, then a further 154 attempts against tqpair=0x7f4934000b90 between 20:22:11.022 and 20:22:11.052. Every attempt targets addr=10.0.0.2, port=4420, fails in posix_sock_create with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:34:23.548 [2024-07-13 20:22:11.052186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.052211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.052379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.052405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.052583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.052609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.052776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.052801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.052995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.053021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.053151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.053176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.053354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.053380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.053573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.053598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.053741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.053766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.053934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.053961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 
00:34:23.548 [2024-07-13 20:22:11.054102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.054127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.054298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.054333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.054516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.054541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.054704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.054730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.054872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.054898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.055043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.055068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.055264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.055290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.055432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.055457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.055593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.055618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.055813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.055839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 
00:34:23.548 [2024-07-13 20:22:11.056013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.056039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.056205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.056230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.056421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.056447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.056582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.056610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.548 qpair failed and we were unable to recover it. 00:34:23.548 [2024-07-13 20:22:11.056770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.548 [2024-07-13 20:22:11.056795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.057002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.057029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.057222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.057247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.057442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.057468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.057639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.057666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.057860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.057891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 
00:34:23.549 [2024-07-13 20:22:11.058035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.058061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.058195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.058222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.058358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.058385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.058556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.058582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.058775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.058801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.058967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.058993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.059183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.059208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.059412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.059437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.059611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.059637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.059840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.059870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 
00:34:23.549 [2024-07-13 20:22:11.060017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.060043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.060237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.060262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.060407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.060433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.060601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.060626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.060792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.060818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.061016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.061042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.061214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.061240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.061420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.061445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.061588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.061613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.061804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.061830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 
00:34:23.549 [2024-07-13 20:22:11.062008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.062034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.062196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.062225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.062384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.062409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.062598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.062623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.062812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.062837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.062988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.063015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.063156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.063181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.063352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.063378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.063535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.063560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.063729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.063755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 
00:34:23.549 [2024-07-13 20:22:11.063897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.063924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.064085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.064110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.064282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.064308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.064499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.064525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.064716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.549 [2024-07-13 20:22:11.064741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.549 qpair failed and we were unable to recover it. 00:34:23.549 [2024-07-13 20:22:11.064907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.550 [2024-07-13 20:22:11.064933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.550 qpair failed and we were unable to recover it. 00:34:23.550 [2024-07-13 20:22:11.065078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.550 [2024-07-13 20:22:11.065103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.550 qpair failed and we were unable to recover it. 00:34:23.550 [2024-07-13 20:22:11.065271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.550 [2024-07-13 20:22:11.065296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.550 qpair failed and we were unable to recover it. 00:34:23.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3352924 Killed "${NVMF_APP[@]}" "$@" 00:34:23.550 [2024-07-13 20:22:11.065488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.550 [2024-07-13 20:22:11.065515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.550 qpair failed and we were unable to recover it. 
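[Editor's note on the run of failures above: errno 111 is ECONNREFUSED, i.e. the host at 10.0.0.2 is reachable but nothing is accepting connections on port 4420 (the NVMe/TCP default) while the target application is down, so every reconnect attempt by the initiator fails immediately and the qpair cannot recover. Below is a minimal standalone C sketch of that connect-and-retry behavior; it is not SPDK code, and the retry count and 1 s pacing are illustrative assumptions.]

/*
 * Minimal sketch (not SPDK code) of the failure mode in this log:
 * a TCP connect() to a reachable host with no listener on the port
 * fails with errno 111 (ECONNREFUSED), and the caller keeps retrying.
 * Address and port mirror the log; run it against any port nothing
 * listens on to see the same errno.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),   /* NVMe/TCP default port, as in the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 1; attempt <= 5; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno 111. */
            fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                    errno, strerror(errno));
            close(fd);
            sleep(1);              /* crude retry pacing (assumption) */
            continue;
        }
        printf("connected on attempt %d\n", attempt);
        close(fd);
        return 0;
    }
    return 1;
}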
00:34:23.550 [2024-07-13 20:22:11.065712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.550 [2024-07-13 20:22:11.065738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.550 qpair failed and we were unable to recover it.
00:34:23.550 [... the same three-line failure sequence repeats from 20:22:11.065942 through 20:22:11.070111, interleaved line-by-line with the shell trace below; duplicate error entries collapsed, trace lines kept in original order ...]
00:34:23.550 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:23.550 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:23.550 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:23.550 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable
00:34:23.550 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:23.550 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3353475
00:34:23.550 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:23.550 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3353475
00:34:23.550 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 3353475 ']'
00:34:23.550 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:23.550 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100
00:34:23.550 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:23.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:23.550 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable
00:34:23.550 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:23.550 [... the same three-line failure sequence continues from 20:22:11.070282 through 20:22:11.071934; duplicate entries collapsed ...]
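[Editor's note on the trace above: the tc2 case killed the previous nvmf_tgt (pid 3352924, the "Killed" line earlier) and disconnect_init/nvmfappstart relaunch it inside the cvl_0_0_ns_spdk network namespace as pid 3353475; waitforlisten then blocks until the new process accepts connections on /var/tmp/spdk.sock, retrying up to max_retries=100. waitforlisten itself is a shell helper in autotest_common.sh; the C sketch below only illustrates that kind of poll-until-listening loop, and the 100 ms interval is an assumption.]

/*
 * Illustrative sketch (an assumption-based stand-in, not SPDK's actual
 * waitforlisten, which is a shell function): poll until a daemon
 * accepts connections on its UNIX-domain RPC socket, here the
 * /var/tmp/spdk.sock path from the log.
 */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;            /* daemon is up and listening */
        }
        close(fd);
        usleep(100 * 1000);      /* 100 ms poll interval (assumption) */
    }
    return -1;                   /* gave up: process never listened */
}

int main(void)
{
    /* 100 retries mirrors 'local max_retries=100' in the trace. */
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        puts("listening");
    else
        puts("timed out");
    return 0;
}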
00:34:23.551 [2024-07-13 20:22:11.072106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.551 [2024-07-13 20:22:11.072131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.551 qpair failed and we were unable to recover it.
00:34:23.551 [... the same three-line failure sequence repeats back-to-back from 20:22:11.072329 through 20:22:11.084393; duplicate entries collapsed ...]
00:34:23.552 [2024-07-13 20:22:11.084558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.552 [2024-07-13 20:22:11.084583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.552 qpair failed and we were unable to recover it.
00:34:23.552 [2024-07-13 20:22:11.084733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.552 [2024-07-13 20:22:11.084768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.552 qpair failed and we were unable to recover it. 00:34:23.552 [2024-07-13 20:22:11.084986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.552 [2024-07-13 20:22:11.085029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.552 qpair failed and we were unable to recover it. 00:34:23.552 [2024-07-13 20:22:11.085255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.552 [2024-07-13 20:22:11.085290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.552 qpair failed and we were unable to recover it. 00:34:23.552 [2024-07-13 20:22:11.085477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.552 [2024-07-13 20:22:11.085512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.552 qpair failed and we were unable to recover it. 00:34:23.552 [2024-07-13 20:22:11.085737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.552 [2024-07-13 20:22:11.085765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.552 qpair failed and we were unable to recover it. 00:34:23.552 [2024-07-13 20:22:11.085910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.552 [2024-07-13 20:22:11.085937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.552 qpair failed and we were unable to recover it. 00:34:23.552 [2024-07-13 20:22:11.086135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.552 [2024-07-13 20:22:11.086160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.552 qpair failed and we were unable to recover it. 00:34:23.552 [2024-07-13 20:22:11.086334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.552 [2024-07-13 20:22:11.086359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.552 qpair failed and we were unable to recover it. 00:34:23.552 [2024-07-13 20:22:11.086560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.552 [2024-07-13 20:22:11.086585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.552 qpair failed and we were unable to recover it. 00:34:23.552 [2024-07-13 20:22:11.086726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.552 [2024-07-13 20:22:11.086752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.552 qpair failed and we were unable to recover it. 
00:34:23.552 [2024-07-13 20:22:11.086922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.552 [2024-07-13 20:22:11.086951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.552 qpair failed and we were unable to recover it. 00:34:23.552 [2024-07-13 20:22:11.087121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.552 [2024-07-13 20:22:11.087165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.552 qpair failed and we were unable to recover it. 00:34:23.552 [2024-07-13 20:22:11.087405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.552 [2024-07-13 20:22:11.087442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.552 qpair failed and we were unable to recover it. 00:34:23.552 [2024-07-13 20:22:11.087638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.552 [2024-07-13 20:22:11.087674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.552 qpair failed and we were unable to recover it. 00:34:23.552 [2024-07-13 20:22:11.087896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.552 [2024-07-13 20:22:11.087925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.088122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.088148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.088317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.088343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.088558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.088594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.088796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.088836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.089040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.089076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 
00:34:23.553 [2024-07-13 20:22:11.089256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.089292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.089462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.089498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.089683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.089720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.089957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.089994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.090214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.090244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.090417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.090462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.090660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.090705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.090892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.090928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.091103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.091129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.091292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.091317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 
00:34:23.553 [2024-07-13 20:22:11.091526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.091574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.091719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.091755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.091944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.091971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.092108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.092138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.092339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.092381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.092590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.092632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.092814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.092839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.093046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.093073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.093268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.093311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.093474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.093517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 
00:34:23.553 [2024-07-13 20:22:11.093702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.093742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.093936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.093963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.094133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.094160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.094374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.094401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.094596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.094623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.094791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.094817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.094991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.095017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.095205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.095230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.095417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.095443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.095636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.095662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 
00:34:23.553 [2024-07-13 20:22:11.095806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.095832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.095986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.096012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.096181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.096207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.096402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.096427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.096627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.553 [2024-07-13 20:22:11.096654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.553 qpair failed and we were unable to recover it. 00:34:23.553 [2024-07-13 20:22:11.096802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.096828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.096991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.097017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.097211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.097236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.097400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.097425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.097589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.097613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 
00:34:23.554 [2024-07-13 20:22:11.097800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.097824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.097986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.098012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.098183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.098208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.098404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.098428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.098613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.098638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.098777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.098801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.099008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.099034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.099206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.099231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.099397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.099421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.099587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.099612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 
00:34:23.554 [2024-07-13 20:22:11.099786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.099816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.099967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.099993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.100139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.100166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.100330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.100356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.100520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.100546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.100743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.100768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.100906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.100932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.101078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.101104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.101252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.101278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.101447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.101473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 
00:34:23.554 [2024-07-13 20:22:11.101648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.101674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.101828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.101854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.102029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.102056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.102244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.102270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.102477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.102502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.102670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.102698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.102875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.102903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.103042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.103067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.103228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.103253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.103426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.103452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 
00:34:23.554 [2024-07-13 20:22:11.103643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.103668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.103860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.103898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.104077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.104103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.104310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.104335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.104481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.104508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.554 [2024-07-13 20:22:11.104705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.554 [2024-07-13 20:22:11.104732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.554 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.104883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.104910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.105064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.105090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.105257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.105283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.105469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.105505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 
00:34:23.555 [2024-07-13 20:22:11.105669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.105694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.105845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.105876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.106073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.106099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.106255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.106282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.106462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.106488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.106693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.106730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.106928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.106954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.107104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.107130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.107323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.107349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.107517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.107542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 
00:34:23.555 [2024-07-13 20:22:11.107716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.107747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.107890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.107917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.108089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.108115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.108258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.108287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.108463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.108488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.108637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.108665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.108861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.108896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.109065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.109091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.109239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.109265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.109477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.109503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 
00:34:23.555 [2024-07-13 20:22:11.109694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.109719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.109889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.109915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.110109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.110135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.110304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.110330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.110506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.110532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.110695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.110721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.110904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.110929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.111107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.111132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.111283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.111308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 00:34:23.555 [2024-07-13 20:22:11.111509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.555 [2024-07-13 20:22:11.111536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.555 qpair failed and we were unable to recover it. 
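For context on the failures above: on Linux, errno = 111 is ECONNREFUSED, meaning nothing was accepting TCP connections on 10.0.0.2:4420 at the time of each attempt, typically because the target listener is not up yet. The following minimal sketch is illustrative only (it is not part of the test or of SPDK) and reproduces the same errno that each posix_sock_create line reports:

    /* Connecting to a TCP port with no listener fails with
     * errno = 111 (ECONNREFUSED), the condition logged above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);           /* NVMe/TCP port used by the test */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

Run against an address with no listener, it prints "connect() failed, errno = 111 (Connection refused)", matching the log.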
[The same failure sequence continues for tqpair=0x7f492c000b90 through 20:22:11.113143, at which point the SPDK nvmf application starts up in the middle of the retries:]
00:34:23.556 [2024-07-13 20:22:11.113274] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization...
00:34:23.556 [2024-07-13 20:22:11.113338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.556 [2024-07-13 20:22:11.113347] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:23.556 [2024-07-13 20:22:11.113377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:23.556 qpair failed and we were unable to recover it.
00:34:23.556 [2024-07-13 20:22:11.113548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.556 [2024-07-13 20:22:11.113574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:23.556 qpair failed and we were unable to recover it.
[The sequence then repeats through 20:22:11.119117, interleaving attempts for tqpair=0x165d570 and tqpair=0x7f492c000b90; all are refused with errno = 111 at 10.0.0.2:4420.]
00:34:23.556 [2024-07-13 20:22:11.119290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.556 [2024-07-13 20:22:11.119316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.556 qpair failed and we were unable to recover it. 00:34:23.556 [2024-07-13 20:22:11.119479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.556 [2024-07-13 20:22:11.119505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.556 qpair failed and we were unable to recover it. 00:34:23.556 [2024-07-13 20:22:11.119677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.556 [2024-07-13 20:22:11.119702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.556 qpair failed and we were unable to recover it. 00:34:23.556 [2024-07-13 20:22:11.119876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.556 [2024-07-13 20:22:11.119910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.120053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.120078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.120244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.120274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.120473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.120498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.120643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.120672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.120849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.120886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.121024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.121049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 
00:34:23.557 [2024-07-13 20:22:11.121219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.121244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.121448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.121473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.121606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.121631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.121769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.121794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.122004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.122030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.122167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.122192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.122335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.122359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.122551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.122577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.122741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.122766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.122956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.122982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 
00:34:23.557 [2024-07-13 20:22:11.123153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.123177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.123321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.123347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.123495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.123520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.123650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.123675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.123872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.123899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.124066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.124091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.124284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.124310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.124451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.124476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.124618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.124648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.124850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.124881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 
00:34:23.557 [2024-07-13 20:22:11.125032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.125058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.125233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.125258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.125451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.125476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.125605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.125630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.125770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.125795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.125996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.126022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.126169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.126196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.126364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.126390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.126562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.126587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.126788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.126814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 
00:34:23.557 [2024-07-13 20:22:11.126963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.126989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.127124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.127150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.127307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.127332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.127505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.127529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.127696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.127721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.127893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.127922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.128071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.128096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.128285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.557 [2024-07-13 20:22:11.128310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.557 qpair failed and we were unable to recover it. 00:34:23.557 [2024-07-13 20:22:11.128463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.128488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.128663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.128688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 
00:34:23.558 [2024-07-13 20:22:11.128858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.128890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.129082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.129107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.129285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.129311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.129485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.129510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.129704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.129729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.129899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.129926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.130093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.130117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.130292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.130317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.130456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.130480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.130668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.130693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 
00:34:23.558 [2024-07-13 20:22:11.130887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.130913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.131049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.131074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.131271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.131296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.131462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.131487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.131634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.131658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.131828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.131854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.132023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.132049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.132212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.132237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.132375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.132400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.132563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.132588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 
00:34:23.558 [2024-07-13 20:22:11.132733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.132758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.132955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.132981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.133123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.133148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.133302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.133327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.133492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.133517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.133689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.133718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.133851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.133883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.134022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.134051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.134225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.134251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.134423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.134448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 
00:34:23.558 [2024-07-13 20:22:11.134612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.134637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.134826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.134851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.135003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.135030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.135168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.135193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.135360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.135385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.135577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.135602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.135794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.135819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.135989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.136016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.136186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.136212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.136359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.136385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 
00:34:23.558 [2024-07-13 20:22:11.136556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.136581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.136736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.136761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.558 qpair failed and we were unable to recover it. 00:34:23.558 [2024-07-13 20:22:11.136944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.558 [2024-07-13 20:22:11.136970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.137131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.137156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.137291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.137316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.137483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.137508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.137655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.137681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.137841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.137871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.138078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.138103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.138240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.138265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 
00:34:23.559 [2024-07-13 20:22:11.138398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.138423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.138594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.138619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.138779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.138808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.139004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.139030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.139191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.139216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.139379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.139403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.139549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.139575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.139737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.139762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.139936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.139963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.140146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.140171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 
00:34:23.559 [2024-07-13 20:22:11.140365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.140390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.140558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.140583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.140746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.140771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.140934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.140961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.141129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.141154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.141298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.141324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.141492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.141517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.141680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.141705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.141898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.141924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.142086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.142112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 
00:34:23.559 [2024-07-13 20:22:11.142241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.142265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.142428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.142456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.142621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.142646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.142843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.142872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.143039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.143064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.143259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.143284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.143417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.143441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.143613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.143638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.143806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.143831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 00:34:23.559 [2024-07-13 20:22:11.144017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.559 [2024-07-13 20:22:11.144047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.559 qpair failed and we were unable to recover it. 
00:34:23.559 [2024-07-13 20:22:11.144182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.559 [2024-07-13 20:22:11.144207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:23.559 qpair failed and we were unable to recover it.
[the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it" triplet for tqpair=0x165d570 (addr=10.0.0.2, port=4420) repeats continuously from 20:22:11.144375 through 20:22:11.149223]
00:34:23.560 EAL: No free 2048 kB hugepages reported on node 1
[the triplet continues for tqpair=0x165d570 from 20:22:11.149386 through 20:22:11.149779]
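The interleaved EAL line above is DPDK reporting that NUMA node 1 has no free 2048 kB hugepages available while the test application initializes. As a minimal sketch (assuming the standard Linux sysfs layout; this is illustrative, not SPDK/DPDK code), the per-node counter behind that message can be read like this:

```c
/* Minimal sketch: print free 2048 kB hugepages per NUMA node, i.e. the
 * counter behind the "EAL: No free 2048 kB hugepages" message above.
 * Assumes the standard Linux sysfs layout; nodes absent on the machine
 * are silently skipped. */
#include <stdio.h>

int main(void)
{
    for (int node = 0; node < 4; node++) {
        char path[128];
        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/hugepages/"
                 "hugepages-2048kB/free_hugepages", node);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;                      /* no such NUMA node here */
        int free_pages = 0;
        if (fscanf(f, "%d", &free_pages) == 1)
            printf("node%d: %d free 2048 kB hugepages\n", node, free_pages);
        fclose(f);
    }
    return 0;
}
```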
[the triplet continues for tqpair=0x165d570 from 20:22:11.149968 through 20:22:11.180431; the failing qpair handle then changes to 0x7f4934000b90:]
00:34:23.848 [2024-07-13 20:22:11.180648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.848 [2024-07-13 20:22:11.180688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.848 qpair failed and we were unable to recover it.
00:34:23.848 [2024-07-13 20:22:11.180863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.180897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.181068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.181093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.181241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.181266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.181440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.181467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.181643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.181668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.181836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.181871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.182016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.182041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.182174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.182201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.182343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.182369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.182537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.182562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 
00:34:23.848 [2024-07-13 20:22:11.182702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.182728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.182913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.182951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.183124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.183151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.183297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.183322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.183482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.183507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.183676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.183701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.183888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.183914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.184078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.184102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.184301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.184325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.184485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.184510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 
00:34:23.848 [2024-07-13 20:22:11.184671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.184696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.184890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.184916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.185074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.185099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.185275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.185301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.185429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.185454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.185630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.185655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.185870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.185909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.186105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.186132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.186322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.186348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.186517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.186543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 
00:34:23.848 [2024-07-13 20:22:11.186739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.186765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.186942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.186968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.187150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.187175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.187344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.187369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.187531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.187556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.187749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.187774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.187939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.848 [2024-07-13 20:22:11.187965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.848 qpair failed and we were unable to recover it. 00:34:23.848 [2024-07-13 20:22:11.188139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.188164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.188334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.188360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.188542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.188572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 
00:34:23.849 [2024-07-13 20:22:11.188768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.188793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.188939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.188965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.189161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.189186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.189362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.189387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.189528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.189553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.189716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.189741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.189904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.189929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.190122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.190147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.190336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.190361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.190525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.190550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 
00:34:23.849 [2024-07-13 20:22:11.190743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.190768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.190946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.190973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.191175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.191201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.191404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.191429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.191570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.191596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.191763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.191787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.191959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.191985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.192155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.192181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.192349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.192374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.192511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.192536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 
00:34:23.849 [2024-07-13 20:22:11.192676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.192702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.192838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.192864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.193036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.193061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.193253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.193278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.193434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.193460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.193624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.193649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.193891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.193931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.194157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.194197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.194382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.194410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.849 [2024-07-13 20:22:11.194587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.194613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 
00:34:23.849 [2024-07-13 20:22:11.194763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.849 [2024-07-13 20:22:11.194790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.849 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.194991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.195018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.195163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.195188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.195348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.195374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.195568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.195594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.195790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.195815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.195987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.196014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.196183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.196210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.196372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.196399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.196572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.196601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 
00:34:23.850 [2024-07-13 20:22:11.196772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.196798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.196946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.196971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.197140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.197166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.197329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.197354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.197527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.197552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.197708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.197734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.197906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.197932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.198100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.198125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.198300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.198325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.198492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.198517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 
00:34:23.850 [2024-07-13 20:22:11.198688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.198716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.198871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.198897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.199068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.199094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.199245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.199271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.199443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.199470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.199616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.199641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.199818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.199846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.200023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.200049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.200243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.200268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.200437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.200464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 
00:34:23.850 [2024-07-13 20:22:11.200624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.200649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.200822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.200848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.201032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.201057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.201219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.201245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.201386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.201411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.201558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.201584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.201784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.201810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.201978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.202005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.202193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.202218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.202385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.202411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 
00:34:23.850 [2024-07-13 20:22:11.202557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.850 [2024-07-13 20:22:11.202583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.850 qpair failed and we were unable to recover it. 00:34:23.850 [2024-07-13 20:22:11.202739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.202764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.202906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.202934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.203078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.203103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.203249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.203274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.203442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.203467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.203607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.203632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.203821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.203846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.203986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:23.851 [2024-07-13 20:22:11.204014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.204039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 
00:34:23.851 [2024-07-13 20:22:11.204215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.204244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.204412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.204437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.204579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.204604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.204791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.204816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.205011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.205050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.205228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.205255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.205433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.205460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.205658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.205684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.205855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.205888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.206063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.206089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 
00:34:23.851 [2024-07-13 20:22:11.206259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.206285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.206424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.206450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.206589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.206615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.206814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.206839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.206996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.207023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.207199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.207226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.207418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.207444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.207587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.207613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.207787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.207813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.208008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.208036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 
00:34:23.851 [2024-07-13 20:22:11.208174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.208200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.208372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.208399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.208592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.208618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.208785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.208810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.208980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.209007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.209178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.209204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.209353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.209379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.209551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.209577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.209714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.209739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.209913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.209938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 
00:34:23.851 [2024-07-13 20:22:11.210109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.210135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.210331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.210356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.851 [2024-07-13 20:22:11.210506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.851 [2024-07-13 20:22:11.210533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.851 qpair failed and we were unable to recover it. 00:34:23.852 [2024-07-13 20:22:11.210704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.852 [2024-07-13 20:22:11.210729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.852 qpair failed and we were unable to recover it. 00:34:23.852 [2024-07-13 20:22:11.210924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.852 [2024-07-13 20:22:11.210950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.852 qpair failed and we were unable to recover it. 00:34:23.852 [2024-07-13 20:22:11.211120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.852 [2024-07-13 20:22:11.211145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.852 qpair failed and we were unable to recover it. 00:34:23.852 [2024-07-13 20:22:11.211337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.852 [2024-07-13 20:22:11.211363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.852 qpair failed and we were unable to recover it. 00:34:23.852 [2024-07-13 20:22:11.211551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.852 [2024-07-13 20:22:11.211576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.852 qpair failed and we were unable to recover it. 00:34:23.852 [2024-07-13 20:22:11.211744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.852 [2024-07-13 20:22:11.211770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.852 qpair failed and we were unable to recover it. 00:34:23.852 [2024-07-13 20:22:11.211940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.852 [2024-07-13 20:22:11.211967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.852 qpair failed and we were unable to recover it. 
00:34:23.856 [2024-07-13 20:22:11.248987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.249013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.249180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.249205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.249343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.249368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.249571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.249596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.249757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.249783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.249951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.249977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.250122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.250147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.250343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.250368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.250560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.250584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.250726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.250751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 
00:34:23.857 [2024-07-13 20:22:11.250918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.250949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.251098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.251123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.251268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.251295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.251487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.251512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.251651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.251677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.251846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.251877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.252044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.252070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.252209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.252235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.252423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.252449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.252640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.252665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 
00:34:23.857 [2024-07-13 20:22:11.252854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.252898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.253068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.253094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.253263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.253288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.253423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.253448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.253617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.253642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.253807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.253832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.253984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.254012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.254180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.254205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.254369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.254394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.254557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.254582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 
00:34:23.857 [2024-07-13 20:22:11.254782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.254807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.255009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.255035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.255183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.255208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.255354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.255379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.255550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.857 [2024-07-13 20:22:11.255576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.857 qpair failed and we were unable to recover it. 00:34:23.857 [2024-07-13 20:22:11.255728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.255753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.255892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.255917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.256065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.256090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.256254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.256280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.256446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.256472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 
00:34:23.858 [2024-07-13 20:22:11.256664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.256690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.256861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.256892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.257037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.257063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.257250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.257275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.257419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.257444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.257617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.257642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.257811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.257836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.258005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.258030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.258219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.258245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.258405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.258430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 
00:34:23.858 [2024-07-13 20:22:11.258601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.258630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.258767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.258793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.258960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.258987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.259157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.259182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.259345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.259370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.259535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.259560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.259728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.259753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.259924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.259951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.260116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.260141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.260284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.260309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 
00:34:23.858 [2024-07-13 20:22:11.260448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.260474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.260638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.260665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.260860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.260891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.261062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.261087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.261258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.261283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.261446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.261471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.261636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.261661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.261853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.261903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.262071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.262096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.262268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.262294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 
00:34:23.858 [2024-07-13 20:22:11.262487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.262513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.262712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.262737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.262906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.262932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.263097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.263123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.263265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.263292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.263461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.858 [2024-07-13 20:22:11.263487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.858 qpair failed and we were unable to recover it. 00:34:23.858 [2024-07-13 20:22:11.263660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.263685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.263893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.263936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.264119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.264146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.264319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.264345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 
00:34:23.859 [2024-07-13 20:22:11.264492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.264518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.264670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.264698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.264902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.264929] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.265105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.265131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.265305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.265331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.265527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.265553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.265700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.265727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.265939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.265966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.266108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.266134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.266309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.266334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 
00:34:23.859 [2024-07-13 20:22:11.266505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.266536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.266714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.266739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.266922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.266949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.267117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.267143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.267317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.267343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.267487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.267512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.267679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.267704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.267902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.267928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.268096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.268122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.268297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.268323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 
00:34:23.859 [2024-07-13 20:22:11.268466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.268492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.268677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.268703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.268880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.268906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.269081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.269107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.269279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.269308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.269485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.269511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.269678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.269704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.269906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.269932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.270081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.270106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.270301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.270327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 
00:34:23.859 [2024-07-13 20:22:11.270469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.270495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.270695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.270720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.270892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.270918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.271088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.271113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.271283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.271310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.271482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.271507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.859 qpair failed and we were unable to recover it. 00:34:23.859 [2024-07-13 20:22:11.271677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.859 [2024-07-13 20:22:11.271702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.271907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.271933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.272083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.272108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.272302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.272327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 
00:34:23.860 [2024-07-13 20:22:11.272490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.272516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.272683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.272708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.272876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.272902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.273066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.273091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.273237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.273263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.273432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.273457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.273599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.273624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.273796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.273822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.273985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.274026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.274210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.274237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 
00:34:23.860 [2024-07-13 20:22:11.274435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.274467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.274669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.274695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.274861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.274894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.275093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.275119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.275268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.275293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.275462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.275488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.275662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.275687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.275861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.275892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.276035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.276061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 00:34:23.860 [2024-07-13 20:22:11.276229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.860 [2024-07-13 20:22:11.276254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.860 qpair failed and we were unable to recover it. 
00:34:23.860 [2024-07-13 20:22:11.276399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.860 [2024-07-13 20:22:11.276424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.860 qpair failed and we were unable to recover it.
00:34:23.860 [... this three-record sequence repeats back-to-back roughly 170 times, from 20:22:11.276399 through 20:22:11.309057, mostly against tqpair=0x7f4934000b90 with stretches against tqpair=0x7f4924000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
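Errno 111 on Linux is ECONNREFUSED: each attempt above reached 10.0.0.2, but nothing was accepting on port 4420 at that moment. A minimal standalone sketch (not SPDK code; the loopback target is an illustrative assumption, the port is the one from the log) that reproduces the exact errno the posix sock layer keeps printing:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Target mirroring the log: TCP, port 4420. 127.0.0.1 stands in
         * for 10.0.0.2 here, on the assumption that nothing is bound to
         * port 4420 locally. */
        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        /* With no listener on the port, the peer answers with RST and
         * connect() fails with ECONNREFUSED (111 on Linux), the same
         * value posix_sock_create logs before nvme_tcp gives up on the
         * qpair. */
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }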
00:34:23.864 [2024-07-13 20:22:11.309147] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:23.864 [2024-07-13 20:22:11.309190] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:23.864 [2024-07-13 20:22:11.309219] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:23.864 [2024-07-13 20:22:11.309244] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:23.864 [2024-07-13 20:22:11.309265] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:23.864 [2024-07-13 20:22:11.309364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:34:23.864 [2024-07-13 20:22:11.309404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:34:23.865 [2024-07-13 20:22:11.309455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:34:23.865 [2024-07-13 20:22:11.309465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:34:23.865 [... seven more connect() failed, errno = 111 / sock connection error / qpair failed sequences for tqpair=0x7f4934000b90 (addr=10.0.0.2, port=4420) were interleaved with the notices above, between 20:22:11.309229 and 20:22:11.310338 ...]
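Read together, the notices give the two ways to get at the trace this mask enables: run the quoted 'spdk_trace -s nvmf -i 0' against the live target (the '-i 0' instance id presumably matching the '.0' suffix of the shm file), or copy /dev/shm/nvmf_trace.0 off the host for offline decoding. That these startup notices land mid-stream, with reactors only now coming up on cores 4-7, also suggests why every connect() so far was refused: the target side likely had no listener on 10.0.0.2:4420 yet while the host qpairs were already retrying.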
00:34:23.865 [2024-07-13 20:22:11.310503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.865 [2024-07-13 20:22:11.310528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:23.865 qpair failed and we were unable to recover it.
00:34:23.865 [... the same sequence repeats about 30 more times, from 20:22:11.310503 through 20:22:11.315910, all against tqpair=0x7f4934000b90, addr=10.0.0.2, port=4420, errno = 111 ...]
00:34:23.865 [2024-07-13 20:22:11.316071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-13 20:22:11.316097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-13 20:22:11.316263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-13 20:22:11.316288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-13 20:22:11.316452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-13 20:22:11.316479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-13 20:22:11.316613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-13 20:22:11.316638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.865 [2024-07-13 20:22:11.316784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.865 [2024-07-13 20:22:11.316809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.865 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.316973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.316999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.317130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.317155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.317305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.317330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.317495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.317520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.317657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.317683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 
00:34:23.866 [2024-07-13 20:22:11.317824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.317849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.318046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.318072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.318203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.318228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.318400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.318425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.318591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.318616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.318790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.318816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.318993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.319019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.319163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.319189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.319456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.319482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.319634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.319659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 
00:34:23.866 [2024-07-13 20:22:11.319789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.319819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.319996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.320022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.320208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.320234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.320390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.320415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.320682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.320707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.320919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.320945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.321109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.321134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.321299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.321324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.321463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.321488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.321634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.321659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 
00:34:23.866 [2024-07-13 20:22:11.321818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.321843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.322033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.322072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.322287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.322320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.322487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.322517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.322670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.322696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.322870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.322896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.323034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.323059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.323234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.323258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.323387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.323413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 00:34:23.866 [2024-07-13 20:22:11.323641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.866 [2024-07-13 20:22:11.323666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.866 qpair failed and we were unable to recover it. 
00:34:23.867 [2024-07-13 20:22:11.323833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.323858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.324024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.324049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.324186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.324211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.324380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.324405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.324570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.324596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.324775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.324800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.324958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.324984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.325139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.325166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.325338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.325364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.325575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.325601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 
00:34:23.867 [2024-07-13 20:22:11.325741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.325766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.325906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.325931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.326066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.326092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.326260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.326285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.326454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.326479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.326642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.326667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.326860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.326891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.327032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.327057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.327202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.327228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.327389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.327414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 
00:34:23.867 [2024-07-13 20:22:11.327607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.327636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.327792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.327818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.327980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.328016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.328217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.328259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.328405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.328433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.328631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.328658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.328810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.328836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.329004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.329031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.329168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.329194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.329371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.329398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 
00:34:23.867 [2024-07-13 20:22:11.329536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.329561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.329727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.329752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.329911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.329948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.330086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.330112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.330263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.330288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.330459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.330486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.330673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.330699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.330874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.330908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.331055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.331083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.867 [2024-07-13 20:22:11.331338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.331364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 
00:34:23.867 [2024-07-13 20:22:11.331532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.867 [2024-07-13 20:22:11.331558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.867 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.331731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.331757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.331923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.331950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.332097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.332123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.332282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.332308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.332471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.332497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.332669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.332696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.332858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.332917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.333089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.333125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.333355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.333391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 
00:34:23.868 [2024-07-13 20:22:11.333698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.333734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.333955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.333992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.334169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.334205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.334405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.334441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.334619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.334656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.334864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.334909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.335147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.335172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.335366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.335392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.335537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.335564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.335732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.335758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 
00:34:23.868 [2024-07-13 20:22:11.335912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.335948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.336126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.336163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.336363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.336395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.336599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.336635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.336809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.336844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.337053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.337092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.337270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.337296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.337429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.337456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.337598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.337625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.337782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.337808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 
00:34:23.868 [2024-07-13 20:22:11.337957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.337985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.338171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.338198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.338346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.338372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.338512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.338538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.338711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.338737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.338910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.338935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.339073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.339099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.339288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.339313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.339469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.339496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.339645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.339672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 
00:34:23.868 [2024-07-13 20:22:11.339814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.339840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.868 qpair failed and we were unable to recover it. 00:34:23.868 [2024-07-13 20:22:11.340033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.868 [2024-07-13 20:22:11.340061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.869 qpair failed and we were unable to recover it. 00:34:23.869 [2024-07-13 20:22:11.340238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.869 [2024-07-13 20:22:11.340264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.869 qpair failed and we were unable to recover it. 00:34:23.869 [2024-07-13 20:22:11.340401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.869 [2024-07-13 20:22:11.340427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.869 qpair failed and we were unable to recover it. 00:34:23.869 [2024-07-13 20:22:11.340574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.869 [2024-07-13 20:22:11.340600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.869 qpair failed and we were unable to recover it. 00:34:23.869 [2024-07-13 20:22:11.340759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.869 [2024-07-13 20:22:11.340785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.869 qpair failed and we were unable to recover it. 00:34:23.869 [2024-07-13 20:22:11.340945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.869 [2024-07-13 20:22:11.340972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.869 qpair failed and we were unable to recover it. 00:34:23.869 [2024-07-13 20:22:11.341128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.869 [2024-07-13 20:22:11.341155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.869 qpair failed and we were unable to recover it. 00:34:23.869 [2024-07-13 20:22:11.341308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.869 [2024-07-13 20:22:11.341335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.869 qpair failed and we were unable to recover it. 00:34:23.869 [2024-07-13 20:22:11.341509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.869 [2024-07-13 20:22:11.341535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.869 qpair failed and we were unable to recover it. 
00:34:23.869 [2024-07-13 20:22:11.341701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.869 [2024-07-13 20:22:11.341726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420
00:34:23.869 qpair failed and we were unable to recover it.
[log collapsed for readability: the three-line error above repeats, with only the timestamps advancing, roughly 200 more times across this span (2024-07-13 20:22:11.341861 through 20:22:11.381709, wall clock 00:34:23.869-00:34:23.874); every attempt targets tqpair=0x7f4924000b90 at 10.0.0.2 port 4420 and ends with "qpair failed and we were unable to recover it."]
00:34:23.874 [2024-07-13 20:22:11.381900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.381927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.874 qpair failed and we were unable to recover it. 00:34:23.874 [2024-07-13 20:22:11.382099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.382125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.874 qpair failed and we were unable to recover it. 00:34:23.874 [2024-07-13 20:22:11.382315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.382341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.874 qpair failed and we were unable to recover it. 00:34:23.874 [2024-07-13 20:22:11.382512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.382538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.874 qpair failed and we were unable to recover it. 00:34:23.874 [2024-07-13 20:22:11.382696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.382722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.874 qpair failed and we were unable to recover it. 00:34:23.874 [2024-07-13 20:22:11.382916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.382943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.874 qpair failed and we were unable to recover it. 00:34:23.874 [2024-07-13 20:22:11.383128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.383155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.874 qpair failed and we were unable to recover it. 00:34:23.874 [2024-07-13 20:22:11.383321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.383347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.874 qpair failed and we were unable to recover it. 00:34:23.874 [2024-07-13 20:22:11.383499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.383525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.874 qpair failed and we were unable to recover it. 00:34:23.874 [2024-07-13 20:22:11.383671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.383696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.874 qpair failed and we were unable to recover it. 
00:34:23.874 [2024-07-13 20:22:11.383856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.383888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.874 qpair failed and we were unable to recover it. 00:34:23.874 [2024-07-13 20:22:11.384047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.384073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.874 qpair failed and we were unable to recover it. 00:34:23.874 [2024-07-13 20:22:11.384210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.384237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.874 qpair failed and we were unable to recover it. 00:34:23.874 [2024-07-13 20:22:11.384429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.384455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.874 qpair failed and we were unable to recover it. 00:34:23.874 [2024-07-13 20:22:11.384607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.384633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.874 qpair failed and we were unable to recover it. 00:34:23.874 [2024-07-13 20:22:11.384802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.874 [2024-07-13 20:22:11.384829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.384967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.384994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.385172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.385197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.385359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.385385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.385560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.385585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 
00:34:23.875 [2024-07-13 20:22:11.385761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.385786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.385929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.385955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.386125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.386151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.386344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.386369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.386528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.386553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.386720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.386747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.386928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.386955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.387120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.387153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.387299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.387324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.387499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.387524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 
00:34:23.875 [2024-07-13 20:22:11.387658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.387684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.387844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.387875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.388039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.388064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.388226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.388252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.388394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.388420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.388599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.388626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.388799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.388825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.389015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.389041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.389222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.389248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.389416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.389442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 
00:34:23.875 [2024-07-13 20:22:11.389571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.389596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.389781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.389807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.390005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.390031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.390173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.390199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.390350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.390376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.390516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.390541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.390694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.390720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.390858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.390889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.391037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.391063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.391204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.391230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 
00:34:23.875 [2024-07-13 20:22:11.391397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.391422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.391585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.391611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.391756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.391782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.391954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.391982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.392133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.392161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.392333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.875 [2024-07-13 20:22:11.392359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.875 qpair failed and we were unable to recover it. 00:34:23.875 [2024-07-13 20:22:11.392512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.392538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.392678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.392704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.392963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.392990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.393161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.393187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 
00:34:23.876 [2024-07-13 20:22:11.393329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.393356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.393514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.393540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.393706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.393731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.393904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.393931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.394069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.394095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.394265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.394293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.394438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.394464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.394714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.394744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.394914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.394941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.395091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.395117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 
00:34:23.876 [2024-07-13 20:22:11.395310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.395336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.395504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.395530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.395700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.395726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.395860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.395892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.396089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.396115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.396261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.396288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.396469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.396495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.396664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.396690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.396851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.396884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.397030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.397061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 
00:34:23.876 [2024-07-13 20:22:11.397216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.397242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.397444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.397470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.397619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.397645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.397803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.397829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.398005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.398032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.398214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.398240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.398492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.398518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.398667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.398694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.398870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.398896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.399094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.399120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 
00:34:23.876 [2024-07-13 20:22:11.399371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.399397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.399570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.399597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.399742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.399768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.399923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.399951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.400120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.400146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.400322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.400348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.876 qpair failed and we were unable to recover it. 00:34:23.876 [2024-07-13 20:22:11.400512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.876 [2024-07-13 20:22:11.400538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.400715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.400741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.400918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.400944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.401082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.401109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 
00:34:23.877 [2024-07-13 20:22:11.401306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.401332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.401501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.401527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.401685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.401711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.401877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.401903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.402083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.402109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.402246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.402271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.402439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.402464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.402621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.402651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.402828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.402854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.403030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.403057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 
00:34:23.877 [2024-07-13 20:22:11.403244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.403270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.403411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.403437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.403723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.403749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.403915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.403942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.404086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.404112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.404266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.404292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.404489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.404514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.404670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.404695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.404832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.404858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.405059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.405085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 
00:34:23.877 [2024-07-13 20:22:11.405229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.405255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.405517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.405544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.405691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.405718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.405898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.405925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.406078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.406104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.406269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.406295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.406434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.406461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.406626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.406652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.406785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.406810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 00:34:23.877 [2024-07-13 20:22:11.407009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.407035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it. 
00:34:23.877 [2024-07-13 20:22:11.407200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.877 [2024-07-13 20:22:11.407226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.877 qpair failed and we were unable to recover it.
00:34:23.878 [2024-07-13 20:22:11.415814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.878 [2024-07-13 20:22:11.415855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.878 qpair failed and we were unable to recover it.
00:34:23.881 [2024-07-13 20:22:11.430981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.881 [2024-07-13 20:22:11.431022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.881 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." messages repeat through 2024-07-13 20:22:11.447080 for tqpair=0x7f4924000b90, 0x7f492c000b90 and 0x7f4934000b90, all against addr=10.0.0.2, port=4420 ...]
00:34:23.883 [2024-07-13 20:22:11.447221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.447247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.447382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.447409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.447579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.447607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.447746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.447773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.447929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.447957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.448124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.448150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.448338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.448363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.448513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.448545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.448686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.448712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.448853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.448885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 
00:34:23.883 [2024-07-13 20:22:11.449038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.449064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.449204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.449231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.449393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.449418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.449560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.449586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.449722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.449748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.449917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.449944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.450082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.450108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.450271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.450297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.450475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.450500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.450664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.450690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 
00:34:23.883 [2024-07-13 20:22:11.450840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.450873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.451018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.451045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.451191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.451218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.451383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.451411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.451556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.451582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.451712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.451737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.451901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.451928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.452088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.452114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.452277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.452302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 00:34:23.883 [2024-07-13 20:22:11.452441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.452466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.883 qpair failed and we were unable to recover it. 
00:34:23.883 [2024-07-13 20:22:11.452599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.883 [2024-07-13 20:22:11.452623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.452791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.452816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.452978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.453004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.453148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.453173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.453323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.453356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.453524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.453554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.453749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.453791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.453936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.453965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.454104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.454130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.454287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.454312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 
00:34:23.884 [2024-07-13 20:22:11.454501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.454526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.454686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.454711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.454879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.454905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.455055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.455080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.455232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.455257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.455387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.455412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.455561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.455586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.455732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.455757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.455906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.455933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.456077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.456102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 
00:34:23.884 [2024-07-13 20:22:11.456275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.456300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.456438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.456462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.456596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.456621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.456758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.456783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.456918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.456944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.457080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.457104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.457244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.457269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.457432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.457456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.457631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.457657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.457793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.457818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 
00:34:23.884 [2024-07-13 20:22:11.457991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.458018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.458182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.458212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.458347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.458371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.458550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.458575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.458719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.458744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.458881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.458907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.459047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.459072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.459201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.459226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.459358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.459383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.459514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.459538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 
00:34:23.884 [2024-07-13 20:22:11.459684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.459709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.884 [2024-07-13 20:22:11.459864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.884 [2024-07-13 20:22:11.459909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.884 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.460171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.460199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.460333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.460359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.460499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.460526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.460673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.460699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.460863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.460895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.461145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.461171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.461362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.461388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.461567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.461593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 
00:34:23.885 [2024-07-13 20:22:11.461744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.461772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.461943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.461970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.462129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.462154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.462347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.462372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.462513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.462538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.462683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.462708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.462853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.462889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.463048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.463074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.463322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.463353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.463518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.463544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 
00:34:23.885 [2024-07-13 20:22:11.463685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.463711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.463869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.463896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.464065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.464091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.467042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.467082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.467237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.467265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.467414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.467442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.467590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.467617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.467781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.467807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.467999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.468026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.468164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.468190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 
00:34:23.885 [2024-07-13 20:22:11.468362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.468388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.468568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.468594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.468760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.468786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.468975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.469001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.469151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.469177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.469350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.469375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.469516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.469543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.469726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.469751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.469896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.469922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.470103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.470129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 
00:34:23.885 [2024-07-13 20:22:11.470308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.470333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.470490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.470516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.885 [2024-07-13 20:22:11.470701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.885 [2024-07-13 20:22:11.470727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.885 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.470901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.470927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.471065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.471091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.471247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.471273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.471441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.471468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.471629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.471655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.471822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.471848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.472023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.472049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 
00:34:23.886 [2024-07-13 20:22:11.472221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.472247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.472419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.472445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.472642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.472669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.472812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.472837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.472984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.473010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.473147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.473173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.473320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.473345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.473512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.473538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.473698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.473729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 00:34:23.886 [2024-07-13 20:22:11.473860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:23.886 [2024-07-13 20:22:11.473894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420 00:34:23.886 qpair failed and we were unable to recover it. 
00:34:23.886 [2024-07-13 20:22:11.474024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:23.886 [2024-07-13 20:22:11.474050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420
00:34:23.886 qpair failed and we were unable to recover it.
[the identical posix_sock_create/nvme_tcp_qpair_connect_sock failure for tqpair=0x7f4924000b90 repeats through 2024-07-13 20:22:11.479405]
00:34:24.156 [2024-07-13 20:22:11.479562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.156 [2024-07-13 20:22:11.479601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:24.156 qpair failed and we were unable to recover it.
[the identical failure for tqpair=0x7f4934000b90 repeats through 2024-07-13 20:22:11.481258]
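errno 111 is ECONNREFUSED: nothing was accepting TCP connections on 10.0.0.2:4420 at these instants, consistent with the target being torn down by this disconnect test while the host keeps retrying. As a minimal sketch only (not part of this run, and assuming the address has no listener when you try it), the same refusal is observable from bash:

    # open fd 3 to the target via bash's /dev/tcp pseudo-device; with no
    # listener the connect fails and bash prints "Connection refused"
    (exec 3<>/dev/tcp/10.0.0.2/4420) || echo "connect failed (ECONNREFUSED expected)"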
00:34:24.156 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:34:24.156 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0
00:34:24.156 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:24.156 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:24.156 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[interleaved with these trace lines, the tqpair=0x7f4934000b90 connect failures continue, 20:22:11.481432 through 20:22:11.482734]
[connect()/qpair failures continue, alternating between tqpair=0x7f4924000b90 and tqpair=0x7f4934000b90, 20:22:11.482897 through 20:22:11.499923]
00:34:24.158 [2024-07-13 20:22:11.500113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.158 [2024-07-13 20:22:11.500154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f492c000b90 with addr=10.0.0.2, port=4420
00:34:24.158 qpair failed and we were unable to recover it.
[further failures for tqpair=0x7f4924000b90 and tqpair=0x7f4934000b90 follow, 20:22:11.500329 through 20:22:11.506436]
00:34:24.159 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:24.159 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:24.159 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:24.159 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[interleaved with these trace lines, the tqpair=0x7f4934000b90 connect failures continue, 20:22:11.506571 through 20:22:11.507871]
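The rpc_cmd trace above issues SPDK's bdev_malloc_create RPC to back the test with a 64 MB RAM-backed bdev using 512-byte blocks, named Malloc0. As a sketch of the equivalent standalone invocation (assuming a running SPDK target and an SPDK checkout providing scripts/rpc.py on the default RPC socket):

    # create the same malloc bdev by hand: 64 MB total, 512 B blocks, name Malloc0
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0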
00:34:24.160 [2024-07-13 20:22:11.508055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.160 [2024-07-13 20:22:11.508081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:24.160 qpair failed and we were unable to recover it.
00:34:24.163 [the same connect()/qpair-failure sequence repeats continuously (~140 occurrences in total) for tqpair=0x7f4934000b90, through 2024-07-13 20:22:11.533820]
00:34:24.163 [2024-07-13 20:22:11.533971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.163 [2024-07-13 20:22:11.533998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:24.163 qpair failed and we were unable to recover it.
00:34:24.163 [the same sequence repeats 7 more times, through 2024-07-13 20:22:11.535339]
00:34:24.163 Malloc0
00:34:24.163 [2024-07-13 20:22:11.535532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.163 [2024-07-13 20:22:11.535558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:24.163 qpair failed and we were unable to recover it.
00:34:24.163 [the same sequence repeats once more at 2024-07-13 20:22:11.535712]
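The bare "Malloc0" record buried in the error stream is the RPC's return value: the name of the bdev created by the earlier bdev_malloc_create call. Driving this by hand, one way to confirm the bdev exists (same assumptions as the sketch above) would be:

  # look up the new bdev by name; prints its JSON descriptor (block size, block count, ...)
  ./scripts/rpc.py bdev_get_bdevs -b Malloc0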
00:34:24.163 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:24.163 [2024-07-13 20:22:11.535905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.163 [2024-07-13 20:22:11.535931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:24.163 qpair failed and we were unable to recover it.
00:34:24.163 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:24.163 [2024-07-13 20:22:11.536079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.163 [2024-07-13 20:22:11.536105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:24.163 qpair failed and we were unable to recover it.
00:34:24.163 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:24.163 [2024-07-13 20:22:11.536235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.163 [2024-07-13 20:22:11.536261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:24.163 qpair failed and we were unable to recover it.
00:34:24.163 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:24.163 [2024-07-13 20:22:11.536391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.164 [2024-07-13 20:22:11.536417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:24.164 qpair failed and we were unable to recover it.
00:34:24.164 [the same sequence repeats 4 more times, through 2024-07-13 20:22:11.537147]
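The rpc_cmd nvmf_create_transport -t tcp -o step traced above initializes the target's NVMe-oF TCP transport; the "TCP Transport Init" notice in the next chunk is its acknowledgement. A hedged standalone equivalent follows; -t selects the transport type, and -o is, to the best of my reading of SPDK's rpc.py, the switch that disables the TCP C2H-success optimization (treat that interpretation as an assumption, not something this log confirms):

  # initialize the NVMe-oF TCP transport in the running target
  ./scripts/rpc.py nvmf_create_transport -t tcp -o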
00:34:24.164 [2024-07-13 20:22:11.537274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.164 [2024-07-13 20:22:11.537301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:24.164 qpair failed and we were unable to recover it.
00:34:24.164 [the same sequence repeats 9 more times for tqpair=0x7f4934000b90, through 2024-07-13 20:22:11.538977]
00:34:24.164 [2024-07-13 20:22:11.539121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:24.164 [2024-07-13 20:22:11.539186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.164 [2024-07-13 20:22:11.539227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4924000b90 with addr=10.0.0.2, port=4420
00:34:24.164 qpair failed and we were unable to recover it.
00:34:24.164 [... the same failure repeats for tqpair=0x7f4924000b90 through 20:22:11.541581, twice for tqpair=0x7f492c000b90 (20:22:11.541744, 20:22:11.541969), then for tqpair=0x7f4934000b90 from 20:22:11.542190 through 20:22:11.542709 ...]
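The "*** TCP Transport Init ***" notice above is the target-side confirmation of the nvmf_create_transport call traced earlier. A sketch of the equivalent out-of-band invocation through scripts/rpc.py, assuming the default JSON-RPC socket (the test's rpc_cmd wrapper resolves to this; the -o flag is copied verbatim from the trace as part of the test's fixed TCP transport options):

    # Create the TCP transport inside the running nvmf target over JSON-RPC.
    # The transport carries no listener yet, which is why the host's
    # connect() attempts above and below are still being refused.
    scripts/rpc.py nvmf_create_transport -t tcp -o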
00:34:24.164 [... connect() failed (errno = 111) / sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420 repeats from 20:22:11.542848 through 20:22:11.546346, each attempt ending "qpair failed and we were unable to recover it." ...]
00:34:24.165 [2024-07-13 20:22:11.546486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.165 [2024-07-13 20:22:11.546513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4934000b90 with addr=10.0.0.2, port=4420
00:34:24.165 qpair failed and we were unable to recover it.
00:34:24.165 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:24.165 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:24.165 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:24.165 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:24.165 [... the same failure repeats for tqpair=0x7f4924000b90 from 20:22:11.546719 through 20:22:11.547884 ...]
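The nvmf_create_subsystem trace above creates the NVMe-oF subsystem the host will later connect to. A sketch of the same call issued directly, again assuming the default rpc.py socket: -a (--allow-any-host) disables the host-NQN whitelist and -s sets the subsystem serial number:

    # Create subsystem cnode1, open to any host NQN, with a fixed serial.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001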
00:34:24.165 [... the failure repeats for tqpair=0x7f4924000b90 through 20:22:11.548877 ...]
00:34:24.165 [2024-07-13 20:22:11.549040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:24.165 [2024-07-13 20:22:11.549077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420
00:34:24.165 qpair failed and we were unable to recover it.
00:34:24.165 [... the same failure now repeats for tqpair=0x165d570 from 20:22:11.549243 through 20:22:11.549760 ...]
00:34:24.166 [... connect() failed (errno = 111) / sock connection error of tqpair=0x165d570 with addr=10.0.0.2, port=4420 repeats from 20:22:11.549932 through 20:22:11.555122, each attempt ending "qpair failed and we were unable to recover it." ...]
00:34:24.166 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:24.166 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:24.166 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:24.166 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:24.166 [... the tqpair=0x165d570 failure repeats from 20:22:11.555264 through 20:22:11.556630 ...]
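nvmf_subsystem_add_ns attaches the bdev named Malloc0 as a namespace of cnode1; the bdev must already exist at this point (its creation happened earlier in the run, outside this excerpt). A sketch of the pair of calls, with illustrative sizes for the malloc bdev since the real ones are not shown here:

    # Hypothetical bdev setup: a 64 MiB RAM-backed bdev with 512-byte blocks.
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    # Expose it as a namespace of the subsystem created above.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0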
00:34:24.166 [... the tqpair=0x165d570 failure repeats from 20:22:11.556780 through 20:22:11.563478 ...]
00:34:24.167 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:24.167 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:24.167 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:24.167 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:24.167 [... the tqpair=0x165d570 failure repeats from 20:22:11.563617 through 20:22:11.565023 ...]
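This nvmf_subsystem_add_listener call is what eventually opens 10.0.0.2:4420; the refusals keep accumulating below until the target prints its listen notice. A sketch of the listener RPC plus an optional polling guard (the guard is not in this test, which simply tolerates the ECONNREFUSED churn; nvmf_subsystem_get_listeners reports the subsystem's active listeners as JSON):

    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Optional: wait until the listener is actually reported before connecting.
    until scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 | grep -q '"4420"'; do
        sleep 0.1
    done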
00:34:24.168 [... the tqpair=0x165d570 failure repeats from 20:22:11.565179 through 20:22:11.566736 ...]
00:34:24.168 [... the tqpair=0x165d570 failure repeats from 20:22:11.566886 through 20:22:11.567281 ...]
00:34:24.168 [2024-07-13 20:22:11.567368] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:24.168 [2024-07-13 20:22:11.569896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.168 [2024-07-13 20:22:11.570061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.168 [2024-07-13 20:22:11.570089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.168 [2024-07-13 20:22:11.570105] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.168 [2024-07-13 20:22:11.570118] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.168 [2024-07-13 20:22:11.570153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.168 qpair failed and we were unable to recover it.
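Once the listen notice appears the TCP handshake succeeds and the failure moves up a layer: the Fabrics CONNECT on I/O qpair 3 is rejected because the target does not recognize controller ID 0x1 (this test exercises target disconnects, so the host is reconnecting with a controller ID the restarted target never issued). Decoding the host-side status, assuming the standard NVMe-oF status encoding:

    # sct 1 is the command-specific status type; sc 130 == 0x82, which for a
    # Fabrics CONNECT command reads as "Connect Invalid Parameters" -- here
    # the offending parameter is the controller ID the target calls unknown.
    printf 'CONNECT status: sct=0x%x sc=0x%x\n' 1 130

The rc -5 and the later "CQ transport error -6 (No such device or address)" line are the negated errno values (-EIO and -ENXIO) the initiator maps these failures to.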
00:34:24.168 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:24.168 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:24.168 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:24.168 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:24.168 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:24.168 20:22:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3352951
00:34:24.168 [... the same Fabrics CONNECT failure sequence (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x165d570; CQ transport error -6 on qpair id 3) repeats at 20:22:11.579767 and 20:22:11.589801, each ending "qpair failed and we were unable to recover it." ...]
00:34:24.168 [2024-07-13 20:22:11.599773] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.168 [2024-07-13 20:22:11.599959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.168 [2024-07-13 20:22:11.599990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.168 [2024-07-13 20:22:11.600007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.168 [2024-07-13 20:22:11.600028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.168 [2024-07-13 20:22:11.600059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.168 qpair failed and we were unable to recover it.
00:34:24.168 [2024-07-13 20:22:11.609772] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.168 [2024-07-13 20:22:11.609924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.168 [2024-07-13 20:22:11.609951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.168 [2024-07-13 20:22:11.609966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.168 [2024-07-13 20:22:11.609979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.168 [2024-07-13 20:22:11.610007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.168 qpair failed and we were unable to recover it.
00:34:24.168 [2024-07-13 20:22:11.619768] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.168 [2024-07-13 20:22:11.619929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.168 [2024-07-13 20:22:11.619957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.168 [2024-07-13 20:22:11.619972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.168 [2024-07-13 20:22:11.619985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.169 [2024-07-13 20:22:11.620014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.169 qpair failed and we were unable to recover it.
00:34:24.169 [2024-07-13 20:22:11.629804] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.169 [2024-07-13 20:22:11.629949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.169 [2024-07-13 20:22:11.629976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.169 [2024-07-13 20:22:11.629990] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.169 [2024-07-13 20:22:11.630004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.169 [2024-07-13 20:22:11.630033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.169 qpair failed and we were unable to recover it.
00:34:24.169 [2024-07-13 20:22:11.639788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.169 [2024-07-13 20:22:11.639942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.169 [2024-07-13 20:22:11.639968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.169 [2024-07-13 20:22:11.639983] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.169 [2024-07-13 20:22:11.639996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.169 [2024-07-13 20:22:11.640025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.169 qpair failed and we were unable to recover it.
00:34:24.169 [2024-07-13 20:22:11.649888] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.169 [2024-07-13 20:22:11.650035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.169 [2024-07-13 20:22:11.650060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.169 [2024-07-13 20:22:11.650075] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.169 [2024-07-13 20:22:11.650088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.169 [2024-07-13 20:22:11.650117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.169 qpair failed and we were unable to recover it.
00:34:24.169 [2024-07-13 20:22:11.659821] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.169 [2024-07-13 20:22:11.659970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.169 [2024-07-13 20:22:11.659997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.169 [2024-07-13 20:22:11.660011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.169 [2024-07-13 20:22:11.660025] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.169 [2024-07-13 20:22:11.660053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.169 qpair failed and we were unable to recover it.
00:34:24.169 [2024-07-13 20:22:11.669854] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.169 [2024-07-13 20:22:11.670008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.169 [2024-07-13 20:22:11.670034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.169 [2024-07-13 20:22:11.670048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.169 [2024-07-13 20:22:11.670062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.169 [2024-07-13 20:22:11.670090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.169 qpair failed and we were unable to recover it.
00:34:24.169 [2024-07-13 20:22:11.679928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.169 [2024-07-13 20:22:11.680074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.169 [2024-07-13 20:22:11.680100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.169 [2024-07-13 20:22:11.680114] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.169 [2024-07-13 20:22:11.680127] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.169 [2024-07-13 20:22:11.680155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.169 qpair failed and we were unable to recover it.
00:34:24.169 [2024-07-13 20:22:11.689977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.169 [2024-07-13 20:22:11.690115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.169 [2024-07-13 20:22:11.690141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.169 [2024-07-13 20:22:11.690161] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.169 [2024-07-13 20:22:11.690176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.169 [2024-07-13 20:22:11.690204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.169 qpair failed and we were unable to recover it.
00:34:24.169 [2024-07-13 20:22:11.700001] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.169 [2024-07-13 20:22:11.700142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.169 [2024-07-13 20:22:11.700167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.169 [2024-07-13 20:22:11.700182] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.169 [2024-07-13 20:22:11.700195] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.169 [2024-07-13 20:22:11.700223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.169 qpair failed and we were unable to recover it.
00:34:24.169 [2024-07-13 20:22:11.710120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.169 [2024-07-13 20:22:11.710279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.169 [2024-07-13 20:22:11.710305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.169 [2024-07-13 20:22:11.710319] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.169 [2024-07-13 20:22:11.710332] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.169 [2024-07-13 20:22:11.710360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.169 qpair failed and we were unable to recover it.
00:34:24.169 [2024-07-13 20:22:11.720029] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.169 [2024-07-13 20:22:11.720223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.169 [2024-07-13 20:22:11.720248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.169 [2024-07-13 20:22:11.720263] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.169 [2024-07-13 20:22:11.720276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.169 [2024-07-13 20:22:11.720304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.169 qpair failed and we were unable to recover it.
00:34:24.169 [2024-07-13 20:22:11.730056] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.169 [2024-07-13 20:22:11.730218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.169 [2024-07-13 20:22:11.730243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.169 [2024-07-13 20:22:11.730258] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.169 [2024-07-13 20:22:11.730271] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.169 [2024-07-13 20:22:11.730299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.169 qpair failed and we were unable to recover it.
00:34:24.169 [2024-07-13 20:22:11.740110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.169 [2024-07-13 20:22:11.740296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.169 [2024-07-13 20:22:11.740321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.169 [2024-07-13 20:22:11.740336] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.169 [2024-07-13 20:22:11.740349] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.169 [2024-07-13 20:22:11.740377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.169 qpair failed and we were unable to recover it.
00:34:24.169 [2024-07-13 20:22:11.750139] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.169 [2024-07-13 20:22:11.750277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.169 [2024-07-13 20:22:11.750302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.169 [2024-07-13 20:22:11.750316] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.169 [2024-07-13 20:22:11.750329] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.169 [2024-07-13 20:22:11.750357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.169 qpair failed and we were unable to recover it.
00:34:24.169 [2024-07-13 20:22:11.760161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.169 [2024-07-13 20:22:11.760341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.169 [2024-07-13 20:22:11.760367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.169 [2024-07-13 20:22:11.760381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.169 [2024-07-13 20:22:11.760395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.169 [2024-07-13 20:22:11.760423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.169 qpair failed and we were unable to recover it.
00:34:24.169 [2024-07-13 20:22:11.770176] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.169 [2024-07-13 20:22:11.770362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.170 [2024-07-13 20:22:11.770388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.170 [2024-07-13 20:22:11.770402] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.170 [2024-07-13 20:22:11.770415] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.170 [2024-07-13 20:22:11.770443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.170 qpair failed and we were unable to recover it.
00:34:24.170 [2024-07-13 20:22:11.780247] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.170 [2024-07-13 20:22:11.780427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.170 [2024-07-13 20:22:11.780452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.170 [2024-07-13 20:22:11.780473] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.170 [2024-07-13 20:22:11.780487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.170 [2024-07-13 20:22:11.780516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.170 qpair failed and we were unable to recover it.
00:34:24.170 [2024-07-13 20:22:11.790346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.170 [2024-07-13 20:22:11.790491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.170 [2024-07-13 20:22:11.790516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.170 [2024-07-13 20:22:11.790531] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.170 [2024-07-13 20:22:11.790543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.170 [2024-07-13 20:22:11.790571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.170 qpair failed and we were unable to recover it.
00:34:24.170 [2024-07-13 20:22:11.800243] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.170 [2024-07-13 20:22:11.800398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.170 [2024-07-13 20:22:11.800431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.170 [2024-07-13 20:22:11.800461] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.170 [2024-07-13 20:22:11.800480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.170 [2024-07-13 20:22:11.800509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.170 qpair failed and we were unable to recover it.
00:34:24.430 [2024-07-13 20:22:11.810319] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.430 [2024-07-13 20:22:11.810467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.430 [2024-07-13 20:22:11.810493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.430 [2024-07-13 20:22:11.810508] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.430 [2024-07-13 20:22:11.810521] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.430 [2024-07-13 20:22:11.810549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.430 qpair failed and we were unable to recover it.
00:34:24.430 [2024-07-13 20:22:11.820343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.430 [2024-07-13 20:22:11.820489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.430 [2024-07-13 20:22:11.820515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.430 [2024-07-13 20:22:11.820530] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.430 [2024-07-13 20:22:11.820543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.430 [2024-07-13 20:22:11.820571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.430 qpair failed and we were unable to recover it.
00:34:24.430 [2024-07-13 20:22:11.830340] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.430 [2024-07-13 20:22:11.830483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.430 [2024-07-13 20:22:11.830509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.430 [2024-07-13 20:22:11.830524] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.430 [2024-07-13 20:22:11.830537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.430 [2024-07-13 20:22:11.830566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.430 qpair failed and we were unable to recover it.
00:34:24.430 [2024-07-13 20:22:11.840384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.430 [2024-07-13 20:22:11.840562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.430 [2024-07-13 20:22:11.840588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.430 [2024-07-13 20:22:11.840602] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.430 [2024-07-13 20:22:11.840616] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.430 [2024-07-13 20:22:11.840644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.430 qpair failed and we were unable to recover it.
00:34:24.430 [2024-07-13 20:22:11.850455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.430 [2024-07-13 20:22:11.850600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.430 [2024-07-13 20:22:11.850625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.430 [2024-07-13 20:22:11.850639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.430 [2024-07-13 20:22:11.850654] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.431 [2024-07-13 20:22:11.850682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.431 qpair failed and we were unable to recover it.
00:34:24.431 [2024-07-13 20:22:11.860512] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.431 [2024-07-13 20:22:11.860650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.431 [2024-07-13 20:22:11.860675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.431 [2024-07-13 20:22:11.860689] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.431 [2024-07-13 20:22:11.860702] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.431 [2024-07-13 20:22:11.860730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.431 qpair failed and we were unable to recover it.
00:34:24.431 [2024-07-13 20:22:11.870477] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.431 [2024-07-13 20:22:11.870619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.431 [2024-07-13 20:22:11.870644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.431 [2024-07-13 20:22:11.870665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.431 [2024-07-13 20:22:11.870679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.431 [2024-07-13 20:22:11.870709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.431 qpair failed and we were unable to recover it.
00:34:24.431 [2024-07-13 20:22:11.880509] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.431 [2024-07-13 20:22:11.880657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.431 [2024-07-13 20:22:11.880683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.431 [2024-07-13 20:22:11.880697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.431 [2024-07-13 20:22:11.880710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.431 [2024-07-13 20:22:11.880738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.431 qpair failed and we were unable to recover it.
00:34:24.431 [2024-07-13 20:22:11.890468] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.431 [2024-07-13 20:22:11.890605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.431 [2024-07-13 20:22:11.890630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.431 [2024-07-13 20:22:11.890644] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.431 [2024-07-13 20:22:11.890657] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.431 [2024-07-13 20:22:11.890685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.431 qpair failed and we were unable to recover it.
00:34:24.431 [2024-07-13 20:22:11.900524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.431 [2024-07-13 20:22:11.900665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.431 [2024-07-13 20:22:11.900691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.431 [2024-07-13 20:22:11.900706] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.431 [2024-07-13 20:22:11.900719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.431 [2024-07-13 20:22:11.900747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.431 qpair failed and we were unable to recover it.
00:34:24.431 [2024-07-13 20:22:11.910536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.431 [2024-07-13 20:22:11.910673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.431 [2024-07-13 20:22:11.910698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.431 [2024-07-13 20:22:11.910712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.431 [2024-07-13 20:22:11.910725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.431 [2024-07-13 20:22:11.910753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.431 qpair failed and we were unable to recover it.
00:34:24.431 [2024-07-13 20:22:11.920570] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.431 [2024-07-13 20:22:11.920716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.431 [2024-07-13 20:22:11.920742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.431 [2024-07-13 20:22:11.920756] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.431 [2024-07-13 20:22:11.920769] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.431 [2024-07-13 20:22:11.920799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.431 qpair failed and we were unable to recover it.
00:34:24.431 [2024-07-13 20:22:11.930651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.431 [2024-07-13 20:22:11.930830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.431 [2024-07-13 20:22:11.930856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.431 [2024-07-13 20:22:11.930879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.431 [2024-07-13 20:22:11.930894] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.431 [2024-07-13 20:22:11.930922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.431 qpair failed and we were unable to recover it.
00:34:24.431 [2024-07-13 20:22:11.940650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.431 [2024-07-13 20:22:11.940838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.431 [2024-07-13 20:22:11.940873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.431 [2024-07-13 20:22:11.940891] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.431 [2024-07-13 20:22:11.940905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.431 [2024-07-13 20:22:11.940934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.431 qpair failed and we were unable to recover it.
00:34:24.431 [2024-07-13 20:22:11.950634] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.431 [2024-07-13 20:22:11.950775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.431 [2024-07-13 20:22:11.950800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.431 [2024-07-13 20:22:11.950815] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.431 [2024-07-13 20:22:11.950828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.431 [2024-07-13 20:22:11.950855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.431 qpair failed and we were unable to recover it.
00:34:24.431 [2024-07-13 20:22:11.960681] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.431 [2024-07-13 20:22:11.960829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.431 [2024-07-13 20:22:11.960860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.431 [2024-07-13 20:22:11.960883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.431 [2024-07-13 20:22:11.960897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.431 [2024-07-13 20:22:11.960925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.431 qpair failed and we were unable to recover it.
00:34:24.431 [2024-07-13 20:22:11.970694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.431 [2024-07-13 20:22:11.970844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.431 [2024-07-13 20:22:11.970876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.431 [2024-07-13 20:22:11.970892] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.431 [2024-07-13 20:22:11.970905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.431 [2024-07-13 20:22:11.970933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.431 qpair failed and we were unable to recover it.
00:34:24.431 [2024-07-13 20:22:11.980729] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.431 [2024-07-13 20:22:11.980878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.431 [2024-07-13 20:22:11.980904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.431 [2024-07-13 20:22:11.980931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.431 [2024-07-13 20:22:11.980946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.431 [2024-07-13 20:22:11.980976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.431 qpair failed and we were unable to recover it.
00:34:24.431 [2024-07-13 20:22:11.990742] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.431 [2024-07-13 20:22:11.990897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.431 [2024-07-13 20:22:11.990931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.431 [2024-07-13 20:22:11.990946] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.431 [2024-07-13 20:22:11.990959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.431 [2024-07-13 20:22:11.990987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.431 qpair failed and we were unable to recover it.
00:34:24.431 [2024-07-13 20:22:12.000791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.431 [2024-07-13 20:22:12.000952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.432 [2024-07-13 20:22:12.000978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.432 [2024-07-13 20:22:12.000993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.432 [2024-07-13 20:22:12.001006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.432 [2024-07-13 20:22:12.001040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.432 qpair failed and we were unable to recover it.
00:34:24.432 [2024-07-13 20:22:12.010807] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.432 [2024-07-13 20:22:12.010973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.432 [2024-07-13 20:22:12.010999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.432 [2024-07-13 20:22:12.011014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.432 [2024-07-13 20:22:12.011027] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.432 [2024-07-13 20:22:12.011055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.432 qpair failed and we were unable to recover it.
00:34:24.432 [2024-07-13 20:22:12.020860] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.432 [2024-07-13 20:22:12.021012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.432 [2024-07-13 20:22:12.021042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.432 [2024-07-13 20:22:12.021058] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.432 [2024-07-13 20:22:12.021071] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.432 [2024-07-13 20:22:12.021100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.432 qpair failed and we were unable to recover it.
00:34:24.432 [2024-07-13 20:22:12.030855] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.432 [2024-07-13 20:22:12.031048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.432 [2024-07-13 20:22:12.031073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.432 [2024-07-13 20:22:12.031088] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.432 [2024-07-13 20:22:12.031101] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.432 [2024-07-13 20:22:12.031130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.432 qpair failed and we were unable to recover it.
00:34:24.432 [2024-07-13 20:22:12.040929] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.432 [2024-07-13 20:22:12.041107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.432 [2024-07-13 20:22:12.041133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.432 [2024-07-13 20:22:12.041147] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.432 [2024-07-13 20:22:12.041161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.432 [2024-07-13 20:22:12.041189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.432 qpair failed and we were unable to recover it.
00:34:24.432 [2024-07-13 20:22:12.050928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.432 [2024-07-13 20:22:12.051077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.432 [2024-07-13 20:22:12.051111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.432 [2024-07-13 20:22:12.051129] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.432 [2024-07-13 20:22:12.051142] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.432 [2024-07-13 20:22:12.051173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.432 qpair failed and we were unable to recover it.
00:34:24.432 [2024-07-13 20:22:12.060984] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.432 [2024-07-13 20:22:12.061134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.432 [2024-07-13 20:22:12.061160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.432 [2024-07-13 20:22:12.061174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.432 [2024-07-13 20:22:12.061187] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.432 [2024-07-13 20:22:12.061216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.432 qpair failed and we were unable to recover it.
00:34:24.432 [2024-07-13 20:22:12.070966] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.432 [2024-07-13 20:22:12.071106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.432 [2024-07-13 20:22:12.071132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.432 [2024-07-13 20:22:12.071146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.432 [2024-07-13 20:22:12.071159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.432 [2024-07-13 20:22:12.071196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.432 qpair failed and we were unable to recover it.
00:34:24.432 [2024-07-13 20:22:12.081026] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.432 [2024-07-13 20:22:12.081189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.432 [2024-07-13 20:22:12.081213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.432 [2024-07-13 20:22:12.081228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.432 [2024-07-13 20:22:12.081240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.432 [2024-07-13 20:22:12.081267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.432 qpair failed and we were unable to recover it.
00:34:24.692 [2024-07-13 20:22:12.091041] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.692 [2024-07-13 20:22:12.091208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.692 [2024-07-13 20:22:12.091237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.692 [2024-07-13 20:22:12.091252] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.692 [2024-07-13 20:22:12.091265] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.692 [2024-07-13 20:22:12.091301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.692 qpair failed and we were unable to recover it.
00:34:24.692 [2024-07-13 20:22:12.101128] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.692 [2024-07-13 20:22:12.101294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.692 [2024-07-13 20:22:12.101320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.692 [2024-07-13 20:22:12.101334] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.692 [2024-07-13 20:22:12.101347] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.692 [2024-07-13 20:22:12.101375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.692 qpair failed and we were unable to recover it.
00:34:24.692 [2024-07-13 20:22:12.111108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.692 [2024-07-13 20:22:12.111270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.692 [2024-07-13 20:22:12.111295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.692 [2024-07-13 20:22:12.111309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.692 [2024-07-13 20:22:12.111322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.692 [2024-07-13 20:22:12.111352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.692 qpair failed and we were unable to recover it.
00:34:24.692 [2024-07-13 20:22:12.121137] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.692 [2024-07-13 20:22:12.121285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.692 [2024-07-13 20:22:12.121309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.692 [2024-07-13 20:22:12.121323] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.692 [2024-07-13 20:22:12.121335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.692 [2024-07-13 20:22:12.121362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.692 qpair failed and we were unable to recover it.
00:34:24.692 [2024-07-13 20:22:12.131138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.692 [2024-07-13 20:22:12.131288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.692 [2024-07-13 20:22:12.131313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.692 [2024-07-13 20:22:12.131328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.692 [2024-07-13 20:22:12.131340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.692 [2024-07-13 20:22:12.131368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.692 qpair failed and we were unable to recover it.
00:34:24.692 [2024-07-13 20:22:12.141181] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.692 [2024-07-13 20:22:12.141316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.692 [2024-07-13 20:22:12.141347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.692 [2024-07-13 20:22:12.141362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.692 [2024-07-13 20:22:12.141375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.692 [2024-07-13 20:22:12.141403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.692 qpair failed and we were unable to recover it.
00:34:24.692 [2024-07-13 20:22:12.151223] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.692 [2024-07-13 20:22:12.151372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.692 [2024-07-13 20:22:12.151397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.692 [2024-07-13 20:22:12.151411] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.692 [2024-07-13 20:22:12.151424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.692 [2024-07-13 20:22:12.151452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.692 qpair failed and we were unable to recover it.
00:34:24.692 [2024-07-13 20:22:12.161242] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:24.692 [2024-07-13 20:22:12.161395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:24.692 [2024-07-13 20:22:12.161420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:24.692 [2024-07-13 20:22:12.161434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:24.692 [2024-07-13 20:22:12.161448] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570
00:34:24.692 [2024-07-13 20:22:12.161476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:34:24.692 qpair failed and we were unable to recover it.
00:34:24.692 [2024-07-13 20:22:12.171271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.692 [2024-07-13 20:22:12.171416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.692 [2024-07-13 20:22:12.171442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.692 [2024-07-13 20:22:12.171456] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.692 [2024-07-13 20:22:12.171469] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.692 [2024-07-13 20:22:12.171497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-07-13 20:22:12.181271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.692 [2024-07-13 20:22:12.181415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.692 [2024-07-13 20:22:12.181440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.692 [2024-07-13 20:22:12.181455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.692 [2024-07-13 20:22:12.181468] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.692 [2024-07-13 20:22:12.181502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-07-13 20:22:12.191302] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.692 [2024-07-13 20:22:12.191439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.692 [2024-07-13 20:22:12.191465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.692 [2024-07-13 20:22:12.191479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.692 [2024-07-13 20:22:12.191492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.692 [2024-07-13 20:22:12.191522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.692 qpair failed and we were unable to recover it. 
00:34:24.692 [2024-07-13 20:22:12.201384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.692 [2024-07-13 20:22:12.201528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.692 [2024-07-13 20:22:12.201553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.692 [2024-07-13 20:22:12.201567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.692 [2024-07-13 20:22:12.201580] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.692 [2024-07-13 20:22:12.201608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-07-13 20:22:12.211374] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.692 [2024-07-13 20:22:12.211513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.692 [2024-07-13 20:22:12.211538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.692 [2024-07-13 20:22:12.211552] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.692 [2024-07-13 20:22:12.211565] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.692 [2024-07-13 20:22:12.211593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.692 qpair failed and we were unable to recover it. 00:34:24.692 [2024-07-13 20:22:12.221394] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.692 [2024-07-13 20:22:12.221536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.692 [2024-07-13 20:22:12.221562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.692 [2024-07-13 20:22:12.221576] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.693 [2024-07-13 20:22:12.221589] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.693 [2024-07-13 20:22:12.221617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.693 qpair failed and we were unable to recover it. 
00:34:24.693 [2024-07-13 20:22:12.231462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.693 [2024-07-13 20:22:12.231612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.693 [2024-07-13 20:22:12.231645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.693 [2024-07-13 20:22:12.231663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.693 [2024-07-13 20:22:12.231676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.693 [2024-07-13 20:22:12.231706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.693 qpair failed and we were unable to recover it. 00:34:24.693 [2024-07-13 20:22:12.241473] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.693 [2024-07-13 20:22:12.241658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.693 [2024-07-13 20:22:12.241683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.693 [2024-07-13 20:22:12.241697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.693 [2024-07-13 20:22:12.241710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.693 [2024-07-13 20:22:12.241738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.693 qpair failed and we were unable to recover it. 00:34:24.693 [2024-07-13 20:22:12.251519] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.693 [2024-07-13 20:22:12.251715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.693 [2024-07-13 20:22:12.251743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.693 [2024-07-13 20:22:12.251762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.693 [2024-07-13 20:22:12.251775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.693 [2024-07-13 20:22:12.251806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.693 qpair failed and we were unable to recover it. 
00:34:24.693 [2024-07-13 20:22:12.261533] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.693 [2024-07-13 20:22:12.261690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.693 [2024-07-13 20:22:12.261716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.693 [2024-07-13 20:22:12.261730] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.693 [2024-07-13 20:22:12.261744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.693 [2024-07-13 20:22:12.261772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.693 qpair failed and we were unable to recover it. 00:34:24.693 [2024-07-13 20:22:12.271623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.693 [2024-07-13 20:22:12.271780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.693 [2024-07-13 20:22:12.271806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.693 [2024-07-13 20:22:12.271820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.693 [2024-07-13 20:22:12.271838] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.693 [2024-07-13 20:22:12.271874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.693 qpair failed and we were unable to recover it. 00:34:24.693 [2024-07-13 20:22:12.281584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.693 [2024-07-13 20:22:12.281727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.693 [2024-07-13 20:22:12.281752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.693 [2024-07-13 20:22:12.281766] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.693 [2024-07-13 20:22:12.281779] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.693 [2024-07-13 20:22:12.281807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.693 qpair failed and we were unable to recover it. 
00:34:24.693 [2024-07-13 20:22:12.291582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.693 [2024-07-13 20:22:12.291722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.693 [2024-07-13 20:22:12.291748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.693 [2024-07-13 20:22:12.291762] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.693 [2024-07-13 20:22:12.291775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.693 [2024-07-13 20:22:12.291803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.693 qpair failed and we were unable to recover it. 00:34:24.693 [2024-07-13 20:22:12.301622] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.693 [2024-07-13 20:22:12.301776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.693 [2024-07-13 20:22:12.301801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.693 [2024-07-13 20:22:12.301816] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.693 [2024-07-13 20:22:12.301829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.693 [2024-07-13 20:22:12.301857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.693 qpair failed and we were unable to recover it. 00:34:24.693 [2024-07-13 20:22:12.311735] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.693 [2024-07-13 20:22:12.311878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.693 [2024-07-13 20:22:12.311903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.693 [2024-07-13 20:22:12.311917] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.693 [2024-07-13 20:22:12.311930] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.693 [2024-07-13 20:22:12.311960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.693 qpair failed and we were unable to recover it. 
00:34:24.693 [2024-07-13 20:22:12.321919] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.693 [2024-07-13 20:22:12.322085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.693 [2024-07-13 20:22:12.322111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.693 [2024-07-13 20:22:12.322125] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.693 [2024-07-13 20:22:12.322138] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.693 [2024-07-13 20:22:12.322166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.693 qpair failed and we were unable to recover it. 00:34:24.693 [2024-07-13 20:22:12.331758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.693 [2024-07-13 20:22:12.331903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.693 [2024-07-13 20:22:12.331929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.693 [2024-07-13 20:22:12.331944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.693 [2024-07-13 20:22:12.331957] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.693 [2024-07-13 20:22:12.331986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.693 qpair failed and we were unable to recover it. 00:34:24.693 [2024-07-13 20:22:12.341803] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.693 [2024-07-13 20:22:12.341953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.693 [2024-07-13 20:22:12.341979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.693 [2024-07-13 20:22:12.341993] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.693 [2024-07-13 20:22:12.342006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.693 [2024-07-13 20:22:12.342034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.693 qpair failed and we were unable to recover it. 
00:34:24.953 [2024-07-13 20:22:12.351924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.953 [2024-07-13 20:22:12.352066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.953 [2024-07-13 20:22:12.352092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.953 [2024-07-13 20:22:12.352120] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.953 [2024-07-13 20:22:12.352143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.953 [2024-07-13 20:22:12.352173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.953 qpair failed and we were unable to recover it. 00:34:24.953 [2024-07-13 20:22:12.361845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.953 [2024-07-13 20:22:12.362003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.953 [2024-07-13 20:22:12.362029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.953 [2024-07-13 20:22:12.362044] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.953 [2024-07-13 20:22:12.362062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.953 [2024-07-13 20:22:12.362091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.953 qpair failed and we were unable to recover it. 00:34:24.953 [2024-07-13 20:22:12.371837] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.953 [2024-07-13 20:22:12.371984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.953 [2024-07-13 20:22:12.372010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.953 [2024-07-13 20:22:12.372024] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.953 [2024-07-13 20:22:12.372036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.953 [2024-07-13 20:22:12.372065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.953 qpair failed and we were unable to recover it. 
00:34:24.953 [2024-07-13 20:22:12.381886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.953 [2024-07-13 20:22:12.382059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.953 [2024-07-13 20:22:12.382086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.953 [2024-07-13 20:22:12.382101] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.953 [2024-07-13 20:22:12.382117] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.953 [2024-07-13 20:22:12.382147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.953 qpair failed and we were unable to recover it. 00:34:24.953 [2024-07-13 20:22:12.391913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.953 [2024-07-13 20:22:12.392051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.953 [2024-07-13 20:22:12.392077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.953 [2024-07-13 20:22:12.392091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.953 [2024-07-13 20:22:12.392104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.953 [2024-07-13 20:22:12.392133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.953 qpair failed and we were unable to recover it. 00:34:24.953 [2024-07-13 20:22:12.401977] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.953 [2024-07-13 20:22:12.402128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.953 [2024-07-13 20:22:12.402154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.953 [2024-07-13 20:22:12.402168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.953 [2024-07-13 20:22:12.402180] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.953 [2024-07-13 20:22:12.402209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.953 qpair failed and we were unable to recover it. 
00:34:24.953 [2024-07-13 20:22:12.411963] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.953 [2024-07-13 20:22:12.412110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.953 [2024-07-13 20:22:12.412136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.953 [2024-07-13 20:22:12.412150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.953 [2024-07-13 20:22:12.412164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.953 [2024-07-13 20:22:12.412194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.953 qpair failed and we were unable to recover it. 00:34:24.953 [2024-07-13 20:22:12.421976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.953 [2024-07-13 20:22:12.422116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.953 [2024-07-13 20:22:12.422142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.953 [2024-07-13 20:22:12.422156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.953 [2024-07-13 20:22:12.422170] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.953 [2024-07-13 20:22:12.422197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.953 qpair failed and we were unable to recover it. 00:34:24.953 [2024-07-13 20:22:12.432110] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.953 [2024-07-13 20:22:12.432252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.953 [2024-07-13 20:22:12.432278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.953 [2024-07-13 20:22:12.432298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.953 [2024-07-13 20:22:12.432312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.953 [2024-07-13 20:22:12.432343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.953 qpair failed and we were unable to recover it. 
00:34:24.953 [2024-07-13 20:22:12.442037] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.953 [2024-07-13 20:22:12.442225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.953 [2024-07-13 20:22:12.442251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.953 [2024-07-13 20:22:12.442265] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.953 [2024-07-13 20:22:12.442278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.953 [2024-07-13 20:22:12.442306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.953 qpair failed and we were unable to recover it. 00:34:24.953 [2024-07-13 20:22:12.452103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.953 [2024-07-13 20:22:12.452250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.953 [2024-07-13 20:22:12.452275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.953 [2024-07-13 20:22:12.452295] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.953 [2024-07-13 20:22:12.452309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.953 [2024-07-13 20:22:12.452339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.953 qpair failed and we were unable to recover it. 00:34:24.953 [2024-07-13 20:22:12.462140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.953 [2024-07-13 20:22:12.462325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.953 [2024-07-13 20:22:12.462351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.953 [2024-07-13 20:22:12.462365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.953 [2024-07-13 20:22:12.462378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.953 [2024-07-13 20:22:12.462407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.953 qpair failed and we were unable to recover it. 
00:34:24.954 [2024-07-13 20:22:12.472130] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.954 [2024-07-13 20:22:12.472318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.954 [2024-07-13 20:22:12.472343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.954 [2024-07-13 20:22:12.472357] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.954 [2024-07-13 20:22:12.472371] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.954 [2024-07-13 20:22:12.472399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.954 qpair failed and we were unable to recover it. 00:34:24.954 [2024-07-13 20:22:12.482198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.954 [2024-07-13 20:22:12.482349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.954 [2024-07-13 20:22:12.482374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.954 [2024-07-13 20:22:12.482389] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.954 [2024-07-13 20:22:12.482402] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.954 [2024-07-13 20:22:12.482430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.954 qpair failed and we were unable to recover it. 00:34:24.954 [2024-07-13 20:22:12.492185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.954 [2024-07-13 20:22:12.492356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.954 [2024-07-13 20:22:12.492381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.954 [2024-07-13 20:22:12.492395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.954 [2024-07-13 20:22:12.492408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.954 [2024-07-13 20:22:12.492436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.954 qpair failed and we were unable to recover it. 
00:34:24.954 [2024-07-13 20:22:12.502212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.954 [2024-07-13 20:22:12.502351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.954 [2024-07-13 20:22:12.502376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.954 [2024-07-13 20:22:12.502390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.954 [2024-07-13 20:22:12.502404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.954 [2024-07-13 20:22:12.502432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.954 qpair failed and we were unable to recover it. 00:34:24.954 [2024-07-13 20:22:12.512283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.954 [2024-07-13 20:22:12.512419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.954 [2024-07-13 20:22:12.512444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.954 [2024-07-13 20:22:12.512458] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.954 [2024-07-13 20:22:12.512471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.954 [2024-07-13 20:22:12.512499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.954 qpair failed and we were unable to recover it. 00:34:24.954 [2024-07-13 20:22:12.522303] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.954 [2024-07-13 20:22:12.522507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.954 [2024-07-13 20:22:12.522533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.954 [2024-07-13 20:22:12.522548] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.954 [2024-07-13 20:22:12.522564] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.954 [2024-07-13 20:22:12.522594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.954 qpair failed and we were unable to recover it. 
00:34:24.954 [2024-07-13 20:22:12.532343] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.954 [2024-07-13 20:22:12.532499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.954 [2024-07-13 20:22:12.532525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.954 [2024-07-13 20:22:12.532540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.954 [2024-07-13 20:22:12.532553] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.954 [2024-07-13 20:22:12.532581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.954 qpair failed and we were unable to recover it. 00:34:24.954 [2024-07-13 20:22:12.542358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.954 [2024-07-13 20:22:12.542498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.954 [2024-07-13 20:22:12.542523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.954 [2024-07-13 20:22:12.542544] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.954 [2024-07-13 20:22:12.542558] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.954 [2024-07-13 20:22:12.542587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.954 qpair failed and we were unable to recover it. 00:34:24.954 [2024-07-13 20:22:12.552372] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.954 [2024-07-13 20:22:12.552510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.954 [2024-07-13 20:22:12.552535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.954 [2024-07-13 20:22:12.552550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.954 [2024-07-13 20:22:12.552563] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.954 [2024-07-13 20:22:12.552591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.954 qpair failed and we were unable to recover it. 
00:34:24.954 [2024-07-13 20:22:12.562466] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.954 [2024-07-13 20:22:12.562666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.954 [2024-07-13 20:22:12.562692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.954 [2024-07-13 20:22:12.562707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.954 [2024-07-13 20:22:12.562724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.954 [2024-07-13 20:22:12.562754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.954 qpair failed and we were unable to recover it. 00:34:24.954 [2024-07-13 20:22:12.572433] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.954 [2024-07-13 20:22:12.572574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.954 [2024-07-13 20:22:12.572600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.954 [2024-07-13 20:22:12.572615] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.954 [2024-07-13 20:22:12.572628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.954 [2024-07-13 20:22:12.572656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.954 qpair failed and we were unable to recover it. 00:34:24.954 [2024-07-13 20:22:12.582440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.954 [2024-07-13 20:22:12.582587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.954 [2024-07-13 20:22:12.582613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.954 [2024-07-13 20:22:12.582627] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.954 [2024-07-13 20:22:12.582641] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.954 [2024-07-13 20:22:12.582669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.954 qpair failed and we were unable to recover it. 
00:34:24.954 [2024-07-13 20:22:12.592486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.954 [2024-07-13 20:22:12.592635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.954 [2024-07-13 20:22:12.592660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.954 [2024-07-13 20:22:12.592675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.954 [2024-07-13 20:22:12.592688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.954 [2024-07-13 20:22:12.592716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.954 qpair failed and we were unable to recover it. 00:34:24.954 [2024-07-13 20:22:12.602496] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:24.954 [2024-07-13 20:22:12.602637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:24.954 [2024-07-13 20:22:12.602663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:24.954 [2024-07-13 20:22:12.602677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:24.954 [2024-07-13 20:22:12.602690] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:24.954 [2024-07-13 20:22:12.602718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:24.954 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-13 20:22:12.612558] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.214 [2024-07-13 20:22:12.612726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.214 [2024-07-13 20:22:12.612752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.214 [2024-07-13 20:22:12.612767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.214 [2024-07-13 20:22:12.612780] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.214 [2024-07-13 20:22:12.612809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.214 qpair failed and we were unable to recover it. 
00:34:25.214 [2024-07-13 20:22:12.622584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.214 [2024-07-13 20:22:12.622729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.214 [2024-07-13 20:22:12.622755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.214 [2024-07-13 20:22:12.622769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.214 [2024-07-13 20:22:12.622782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.214 [2024-07-13 20:22:12.622810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-13 20:22:12.632626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.214 [2024-07-13 20:22:12.632762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.214 [2024-07-13 20:22:12.632788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.214 [2024-07-13 20:22:12.632809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.214 [2024-07-13 20:22:12.632823] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.214 [2024-07-13 20:22:12.632851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.214 qpair failed and we were unable to recover it. 00:34:25.214 [2024-07-13 20:22:12.642657] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.214 [2024-07-13 20:22:12.642802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.214 [2024-07-13 20:22:12.642828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.214 [2024-07-13 20:22:12.642842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.214 [2024-07-13 20:22:12.642855] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.215 [2024-07-13 20:22:12.642890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.215 qpair failed and we were unable to recover it. 
00:34:25.215 [2024-07-13 20:22:12.652650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.215 [2024-07-13 20:22:12.652793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.215 [2024-07-13 20:22:12.652818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.215 [2024-07-13 20:22:12.652832] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.215 [2024-07-13 20:22:12.652845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.215 [2024-07-13 20:22:12.652882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-13 20:22:12.662678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.215 [2024-07-13 20:22:12.662815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.215 [2024-07-13 20:22:12.662840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.215 [2024-07-13 20:22:12.662855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.215 [2024-07-13 20:22:12.662874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.215 [2024-07-13 20:22:12.662905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-13 20:22:12.672693] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.215 [2024-07-13 20:22:12.672846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.215 [2024-07-13 20:22:12.672878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.215 [2024-07-13 20:22:12.672894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.215 [2024-07-13 20:22:12.672908] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.215 [2024-07-13 20:22:12.672936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.215 qpair failed and we were unable to recover it. 
00:34:25.215 [2024-07-13 20:22:12.682755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.215 [2024-07-13 20:22:12.682903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.215 [2024-07-13 20:22:12.682928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.215 [2024-07-13 20:22:12.682943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.215 [2024-07-13 20:22:12.682956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.215 [2024-07-13 20:22:12.682984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-13 20:22:12.692756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.215 [2024-07-13 20:22:12.692930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.215 [2024-07-13 20:22:12.692956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.215 [2024-07-13 20:22:12.692970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.215 [2024-07-13 20:22:12.692983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.215 [2024-07-13 20:22:12.693012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.215 qpair failed and we were unable to recover it. 00:34:25.215 [2024-07-13 20:22:12.702811] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.215 [2024-07-13 20:22:12.702963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.215 [2024-07-13 20:22:12.702990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.215 [2024-07-13 20:22:12.703004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.215 [2024-07-13 20:22:12.703017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.215 [2024-07-13 20:22:12.703045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.215 qpair failed and we were unable to recover it. 
00:34:25.736 [2024-07-13 20:22:13.374761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.736 [2024-07-13 20:22:13.374941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.736 [2024-07-13 20:22:13.374967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.736 [2024-07-13 20:22:13.374981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.736 [2024-07-13 20:22:13.374994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.736 [2024-07-13 20:22:13.375022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.736 qpair failed and we were unable to recover it. 00:34:25.736 [2024-07-13 20:22:13.384789] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.736 [2024-07-13 20:22:13.384932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.736 [2024-07-13 20:22:13.384957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.736 [2024-07-13 20:22:13.384972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.736 [2024-07-13 20:22:13.384985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.736 [2024-07-13 20:22:13.385013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.736 qpair failed and we were unable to recover it. 00:34:25.996 [2024-07-13 20:22:13.394791] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.997 [2024-07-13 20:22:13.394979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.997 [2024-07-13 20:22:13.395008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.997 [2024-07-13 20:22:13.395029] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.997 [2024-07-13 20:22:13.395043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.997 [2024-07-13 20:22:13.395072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.997 qpair failed and we were unable to recover it. 
00:34:25.997 [2024-07-13 20:22:13.404853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.997 [2024-07-13 20:22:13.405051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.997 [2024-07-13 20:22:13.405076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.997 [2024-07-13 20:22:13.405091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.997 [2024-07-13 20:22:13.405104] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.997 [2024-07-13 20:22:13.405133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.997 qpair failed and we were unable to recover it. 00:34:25.997 [2024-07-13 20:22:13.414893] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.997 [2024-07-13 20:22:13.415077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.997 [2024-07-13 20:22:13.415103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.997 [2024-07-13 20:22:13.415118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.997 [2024-07-13 20:22:13.415131] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.997 [2024-07-13 20:22:13.415160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.997 qpair failed and we were unable to recover it. 00:34:25.997 [2024-07-13 20:22:13.424902] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.997 [2024-07-13 20:22:13.425070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.997 [2024-07-13 20:22:13.425095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.997 [2024-07-13 20:22:13.425109] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.997 [2024-07-13 20:22:13.425122] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.997 [2024-07-13 20:22:13.425151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.997 qpair failed and we were unable to recover it. 
00:34:25.997 [2024-07-13 20:22:13.434896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.997 [2024-07-13 20:22:13.435036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.997 [2024-07-13 20:22:13.435061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.997 [2024-07-13 20:22:13.435076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.997 [2024-07-13 20:22:13.435089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.997 [2024-07-13 20:22:13.435117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.997 qpair failed and we were unable to recover it. 00:34:25.997 [2024-07-13 20:22:13.444961] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.997 [2024-07-13 20:22:13.445114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.997 [2024-07-13 20:22:13.445140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.997 [2024-07-13 20:22:13.445154] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.997 [2024-07-13 20:22:13.445167] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.997 [2024-07-13 20:22:13.445196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.997 qpair failed and we were unable to recover it. 00:34:25.997 [2024-07-13 20:22:13.454973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.997 [2024-07-13 20:22:13.455119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.997 [2024-07-13 20:22:13.455144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.997 [2024-07-13 20:22:13.455159] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.997 [2024-07-13 20:22:13.455172] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.997 [2024-07-13 20:22:13.455200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.997 qpair failed and we were unable to recover it. 
00:34:25.997 [2024-07-13 20:22:13.465002] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.997 [2024-07-13 20:22:13.465153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.997 [2024-07-13 20:22:13.465178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.997 [2024-07-13 20:22:13.465192] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.997 [2024-07-13 20:22:13.465205] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.997 [2024-07-13 20:22:13.465233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.997 qpair failed and we were unable to recover it. 00:34:25.997 [2024-07-13 20:22:13.475044] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.997 [2024-07-13 20:22:13.475203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.997 [2024-07-13 20:22:13.475228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.997 [2024-07-13 20:22:13.475242] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.997 [2024-07-13 20:22:13.475255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.997 [2024-07-13 20:22:13.475283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.997 qpair failed and we were unable to recover it. 00:34:25.997 [2024-07-13 20:22:13.485057] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.997 [2024-07-13 20:22:13.485206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.997 [2024-07-13 20:22:13.485235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.997 [2024-07-13 20:22:13.485251] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.997 [2024-07-13 20:22:13.485264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.997 [2024-07-13 20:22:13.485291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.997 qpair failed and we were unable to recover it. 
00:34:25.997 [2024-07-13 20:22:13.495084] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.997 [2024-07-13 20:22:13.495225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.997 [2024-07-13 20:22:13.495250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.997 [2024-07-13 20:22:13.495264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.997 [2024-07-13 20:22:13.495277] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.997 [2024-07-13 20:22:13.495305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.997 qpair failed and we were unable to recover it. 00:34:25.997 [2024-07-13 20:22:13.505145] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.997 [2024-07-13 20:22:13.505287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.997 [2024-07-13 20:22:13.505312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.997 [2024-07-13 20:22:13.505326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.997 [2024-07-13 20:22:13.505339] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.997 [2024-07-13 20:22:13.505367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.997 qpair failed and we were unable to recover it. 00:34:25.997 [2024-07-13 20:22:13.515226] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.997 [2024-07-13 20:22:13.515364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.997 [2024-07-13 20:22:13.515390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.997 [2024-07-13 20:22:13.515405] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.997 [2024-07-13 20:22:13.515418] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.997 [2024-07-13 20:22:13.515447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.997 qpair failed and we were unable to recover it. 
00:34:25.997 [2024-07-13 20:22:13.525234] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.997 [2024-07-13 20:22:13.525406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.997 [2024-07-13 20:22:13.525431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.997 [2024-07-13 20:22:13.525445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.997 [2024-07-13 20:22:13.525458] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.997 [2024-07-13 20:22:13.525486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.997 qpair failed and we were unable to recover it. 00:34:25.997 [2024-07-13 20:22:13.535285] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.997 [2024-07-13 20:22:13.535430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.997 [2024-07-13 20:22:13.535455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.997 [2024-07-13 20:22:13.535469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.997 [2024-07-13 20:22:13.535483] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.998 [2024-07-13 20:22:13.535511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.998 qpair failed and we were unable to recover it. 00:34:25.998 [2024-07-13 20:22:13.545253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.998 [2024-07-13 20:22:13.545417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.998 [2024-07-13 20:22:13.545442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.998 [2024-07-13 20:22:13.545457] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.998 [2024-07-13 20:22:13.545470] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.998 [2024-07-13 20:22:13.545498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.998 qpair failed and we were unable to recover it. 
00:34:25.998 [2024-07-13 20:22:13.555251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.998 [2024-07-13 20:22:13.555443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.998 [2024-07-13 20:22:13.555469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.998 [2024-07-13 20:22:13.555483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.998 [2024-07-13 20:22:13.555496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.998 [2024-07-13 20:22:13.555523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.998 qpair failed and we were unable to recover it. 00:34:25.998 [2024-07-13 20:22:13.565306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.998 [2024-07-13 20:22:13.565488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.998 [2024-07-13 20:22:13.565513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.998 [2024-07-13 20:22:13.565528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.998 [2024-07-13 20:22:13.565541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.998 [2024-07-13 20:22:13.565569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.998 qpair failed and we were unable to recover it. 00:34:25.998 [2024-07-13 20:22:13.575315] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.998 [2024-07-13 20:22:13.575458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.998 [2024-07-13 20:22:13.575488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.998 [2024-07-13 20:22:13.575504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.998 [2024-07-13 20:22:13.575517] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.998 [2024-07-13 20:22:13.575548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.998 qpair failed and we were unable to recover it. 
00:34:25.998 [2024-07-13 20:22:13.585336] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.998 [2024-07-13 20:22:13.585476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.998 [2024-07-13 20:22:13.585501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.998 [2024-07-13 20:22:13.585516] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.998 [2024-07-13 20:22:13.585529] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.998 [2024-07-13 20:22:13.585557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.998 qpair failed and we were unable to recover it. 00:34:25.998 [2024-07-13 20:22:13.595418] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.998 [2024-07-13 20:22:13.595558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.998 [2024-07-13 20:22:13.595583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.998 [2024-07-13 20:22:13.595597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.998 [2024-07-13 20:22:13.595610] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.998 [2024-07-13 20:22:13.595639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.998 qpair failed and we were unable to recover it. 00:34:25.998 [2024-07-13 20:22:13.605447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.998 [2024-07-13 20:22:13.605636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.998 [2024-07-13 20:22:13.605662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.998 [2024-07-13 20:22:13.605676] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.998 [2024-07-13 20:22:13.605689] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.998 [2024-07-13 20:22:13.605717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.998 qpair failed and we were unable to recover it. 
00:34:25.998 [2024-07-13 20:22:13.615423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.998 [2024-07-13 20:22:13.615611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.998 [2024-07-13 20:22:13.615636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.998 [2024-07-13 20:22:13.615650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.998 [2024-07-13 20:22:13.615663] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.998 [2024-07-13 20:22:13.615696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.998 qpair failed and we were unable to recover it. 00:34:25.998 [2024-07-13 20:22:13.625434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.998 [2024-07-13 20:22:13.625570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.998 [2024-07-13 20:22:13.625596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.998 [2024-07-13 20:22:13.625610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.998 [2024-07-13 20:22:13.625623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.998 [2024-07-13 20:22:13.625651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.998 qpair failed and we were unable to recover it. 00:34:25.998 [2024-07-13 20:22:13.635500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.998 [2024-07-13 20:22:13.635680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.998 [2024-07-13 20:22:13.635705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.998 [2024-07-13 20:22:13.635719] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.998 [2024-07-13 20:22:13.635732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.998 [2024-07-13 20:22:13.635762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.998 qpair failed and we were unable to recover it. 
00:34:25.998 [2024-07-13 20:22:13.645655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:25.998 [2024-07-13 20:22:13.645800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:25.998 [2024-07-13 20:22:13.645825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:25.998 [2024-07-13 20:22:13.645839] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:25.998 [2024-07-13 20:22:13.645852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:25.998 [2024-07-13 20:22:13.645886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:25.998 qpair failed and we were unable to recover it. 00:34:26.259 [2024-07-13 20:22:13.655529] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.259 [2024-07-13 20:22:13.655671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.259 [2024-07-13 20:22:13.655697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.259 [2024-07-13 20:22:13.655712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.259 [2024-07-13 20:22:13.655725] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.259 [2024-07-13 20:22:13.655755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.259 qpair failed and we were unable to recover it. 00:34:26.259 [2024-07-13 20:22:13.665571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.259 [2024-07-13 20:22:13.665724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.259 [2024-07-13 20:22:13.665756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.259 [2024-07-13 20:22:13.665771] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.259 [2024-07-13 20:22:13.665784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.259 [2024-07-13 20:22:13.665813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.259 qpair failed and we were unable to recover it. 
00:34:26.259 [2024-07-13 20:22:13.675568] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.259 [2024-07-13 20:22:13.675750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.259 [2024-07-13 20:22:13.675776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.259 [2024-07-13 20:22:13.675790] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.259 [2024-07-13 20:22:13.675803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.259 [2024-07-13 20:22:13.675831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.259 qpair failed and we were unable to recover it. 00:34:26.259 [2024-07-13 20:22:13.685630] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.259 [2024-07-13 20:22:13.685774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.259 [2024-07-13 20:22:13.685799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.259 [2024-07-13 20:22:13.685814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.259 [2024-07-13 20:22:13.685826] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.259 [2024-07-13 20:22:13.685854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.259 qpair failed and we were unable to recover it. 00:34:26.259 [2024-07-13 20:22:13.695658] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.259 [2024-07-13 20:22:13.695846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.259 [2024-07-13 20:22:13.695878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.259 [2024-07-13 20:22:13.695894] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.259 [2024-07-13 20:22:13.695907] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.259 [2024-07-13 20:22:13.695936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.259 qpair failed and we were unable to recover it. 
00:34:26.259 [2024-07-13 20:22:13.705680] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.259 [2024-07-13 20:22:13.705820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.259 [2024-07-13 20:22:13.705846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.259 [2024-07-13 20:22:13.705860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.259 [2024-07-13 20:22:13.705880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.259 [2024-07-13 20:22:13.705916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.259 qpair failed and we were unable to recover it. 00:34:26.259 [2024-07-13 20:22:13.715699] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.259 [2024-07-13 20:22:13.715840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.259 [2024-07-13 20:22:13.715872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.259 [2024-07-13 20:22:13.715889] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.259 [2024-07-13 20:22:13.715904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.259 [2024-07-13 20:22:13.715932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.259 qpair failed and we were unable to recover it. 00:34:26.259 [2024-07-13 20:22:13.725736] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.259 [2024-07-13 20:22:13.725930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.259 [2024-07-13 20:22:13.725955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.259 [2024-07-13 20:22:13.725970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.259 [2024-07-13 20:22:13.725983] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.259 [2024-07-13 20:22:13.726011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.259 qpair failed and we were unable to recover it. 
00:34:26.259 [2024-07-13 20:22:13.735756] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.259 [2024-07-13 20:22:13.735903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.259 [2024-07-13 20:22:13.735929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.259 [2024-07-13 20:22:13.735943] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.259 [2024-07-13 20:22:13.735956] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.259 [2024-07-13 20:22:13.735984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.259 qpair failed and we were unable to recover it. 00:34:26.259 [2024-07-13 20:22:13.745786] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.259 [2024-07-13 20:22:13.745932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.259 [2024-07-13 20:22:13.745957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.259 [2024-07-13 20:22:13.745971] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.259 [2024-07-13 20:22:13.745986] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.259 [2024-07-13 20:22:13.746014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.259 qpair failed and we were unable to recover it. 00:34:26.259 [2024-07-13 20:22:13.755911] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.259 [2024-07-13 20:22:13.756052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.259 [2024-07-13 20:22:13.756082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.259 [2024-07-13 20:22:13.756097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.259 [2024-07-13 20:22:13.756110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.259 [2024-07-13 20:22:13.756139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.259 qpair failed and we were unable to recover it. 
00:34:26.259 [2024-07-13 20:22:13.765881] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.259 [2024-07-13 20:22:13.766021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.259 [2024-07-13 20:22:13.766046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.259 [2024-07-13 20:22:13.766060] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.259 [2024-07-13 20:22:13.766073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.259 [2024-07-13 20:22:13.766101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.259 qpair failed and we were unable to recover it. 00:34:26.259 [2024-07-13 20:22:13.775895] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.259 [2024-07-13 20:22:13.776036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.259 [2024-07-13 20:22:13.776061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.259 [2024-07-13 20:22:13.776076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.259 [2024-07-13 20:22:13.776088] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.259 [2024-07-13 20:22:13.776118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.260 qpair failed and we were unable to recover it. 00:34:26.260 [2024-07-13 20:22:13.785928] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.260 [2024-07-13 20:22:13.786068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.260 [2024-07-13 20:22:13.786094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.260 [2024-07-13 20:22:13.786108] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.260 [2024-07-13 20:22:13.786121] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.260 [2024-07-13 20:22:13.786149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.260 qpair failed and we were unable to recover it. 
00:34:26.260 [2024-07-13 20:22:13.795933] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.260 [2024-07-13 20:22:13.796072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.260 [2024-07-13 20:22:13.796097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.260 [2024-07-13 20:22:13.796111] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.260 [2024-07-13 20:22:13.796130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.260 [2024-07-13 20:22:13.796159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.260 qpair failed and we were unable to recover it. 00:34:26.260 [2024-07-13 20:22:13.806031] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.260 [2024-07-13 20:22:13.806176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.260 [2024-07-13 20:22:13.806201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.260 [2024-07-13 20:22:13.806215] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.260 [2024-07-13 20:22:13.806228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.260 [2024-07-13 20:22:13.806256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.260 qpair failed and we were unable to recover it. 00:34:26.260 [2024-07-13 20:22:13.816080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.260 [2024-07-13 20:22:13.816239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.260 [2024-07-13 20:22:13.816265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.260 [2024-07-13 20:22:13.816279] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.260 [2024-07-13 20:22:13.816292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.260 [2024-07-13 20:22:13.816321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.260 qpair failed and we were unable to recover it. 
00:34:26.260 [2024-07-13 20:22:13.826019] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.260 [2024-07-13 20:22:13.826158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.260 [2024-07-13 20:22:13.826184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.260 [2024-07-13 20:22:13.826198] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.260 [2024-07-13 20:22:13.826211] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.260 [2024-07-13 20:22:13.826239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.260 qpair failed and we were unable to recover it. 00:34:26.260 [2024-07-13 20:22:13.836053] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.260 [2024-07-13 20:22:13.836192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.260 [2024-07-13 20:22:13.836217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.260 [2024-07-13 20:22:13.836231] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.260 [2024-07-13 20:22:13.836244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.260 [2024-07-13 20:22:13.836272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.260 qpair failed and we were unable to recover it. 00:34:26.260 [2024-07-13 20:22:13.846095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.260 [2024-07-13 20:22:13.846245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.260 [2024-07-13 20:22:13.846271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.260 [2024-07-13 20:22:13.846286] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.260 [2024-07-13 20:22:13.846299] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.260 [2024-07-13 20:22:13.846326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.260 qpair failed and we were unable to recover it. 
00:34:26.260 [2024-07-13 20:22:13.856120] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.260 [2024-07-13 20:22:13.856264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.260 [2024-07-13 20:22:13.856289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.260 [2024-07-13 20:22:13.856303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.260 [2024-07-13 20:22:13.856317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.260 [2024-07-13 20:22:13.856345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.260 qpair failed and we were unable to recover it. 00:34:26.260 [2024-07-13 20:22:13.866140] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.260 [2024-07-13 20:22:13.866281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.260 [2024-07-13 20:22:13.866306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.260 [2024-07-13 20:22:13.866320] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.260 [2024-07-13 20:22:13.866334] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.260 [2024-07-13 20:22:13.866362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.260 qpair failed and we were unable to recover it. 00:34:26.260 [2024-07-13 20:22:13.876171] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.260 [2024-07-13 20:22:13.876327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.260 [2024-07-13 20:22:13.876352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.260 [2024-07-13 20:22:13.876367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.260 [2024-07-13 20:22:13.876381] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.260 [2024-07-13 20:22:13.876409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.260 qpair failed and we were unable to recover it. 
00:34:26.260 [2024-07-13 20:22:13.886220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.260 [2024-07-13 20:22:13.886361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.260 [2024-07-13 20:22:13.886386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.260 [2024-07-13 20:22:13.886401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.260 [2024-07-13 20:22:13.886419] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.260 [2024-07-13 20:22:13.886448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.260 qpair failed and we were unable to recover it. 00:34:26.260 [2024-07-13 20:22:13.896215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.260 [2024-07-13 20:22:13.896358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.260 [2024-07-13 20:22:13.896383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.260 [2024-07-13 20:22:13.896397] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.260 [2024-07-13 20:22:13.896409] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.260 [2024-07-13 20:22:13.896437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.260 qpair failed and we were unable to recover it. 00:34:26.260 [2024-07-13 20:22:13.906244] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.260 [2024-07-13 20:22:13.906378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.260 [2024-07-13 20:22:13.906403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.260 [2024-07-13 20:22:13.906417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.260 [2024-07-13 20:22:13.906430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.260 [2024-07-13 20:22:13.906457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.260 qpair failed and we were unable to recover it. 
00:34:26.521 [2024-07-13 20:22:13.916331] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.521 [2024-07-13 20:22:13.916492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.521 [2024-07-13 20:22:13.916519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.521 [2024-07-13 20:22:13.916534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.521 [2024-07-13 20:22:13.916547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.521 [2024-07-13 20:22:13.916575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.521 qpair failed and we were unable to recover it. 00:34:26.521 [2024-07-13 20:22:13.926320] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.521 [2024-07-13 20:22:13.926465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.521 [2024-07-13 20:22:13.926492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.521 [2024-07-13 20:22:13.926507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.521 [2024-07-13 20:22:13.926520] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.521 [2024-07-13 20:22:13.926548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.521 qpair failed and we were unable to recover it. 00:34:26.521 [2024-07-13 20:22:13.936342] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.521 [2024-07-13 20:22:13.936522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.521 [2024-07-13 20:22:13.936548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.521 [2024-07-13 20:22:13.936563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.521 [2024-07-13 20:22:13.936576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.521 [2024-07-13 20:22:13.936605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.521 qpair failed and we were unable to recover it. 
00:34:26.521 [2024-07-13 20:22:13.946398] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.521 [2024-07-13 20:22:13.946543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.521 [2024-07-13 20:22:13.946569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.521 [2024-07-13 20:22:13.946584] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.521 [2024-07-13 20:22:13.946598] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.521 [2024-07-13 20:22:13.946626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.521 qpair failed and we were unable to recover it. 00:34:26.521 [2024-07-13 20:22:13.956417] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.521 [2024-07-13 20:22:13.956559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.521 [2024-07-13 20:22:13.956584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.521 [2024-07-13 20:22:13.956598] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.521 [2024-07-13 20:22:13.956612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.521 [2024-07-13 20:22:13.956640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.521 qpair failed and we were unable to recover it. 00:34:26.521 [2024-07-13 20:22:13.966424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.521 [2024-07-13 20:22:13.966566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.521 [2024-07-13 20:22:13.966591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.521 [2024-07-13 20:22:13.966606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.521 [2024-07-13 20:22:13.966620] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.521 [2024-07-13 20:22:13.966647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.521 qpair failed and we were unable to recover it. 
00:34:26.521 [2024-07-13 20:22:13.976467] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.521 [2024-07-13 20:22:13.976612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.521 [2024-07-13 20:22:13.976637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.521 [2024-07-13 20:22:13.976651] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.521 [2024-07-13 20:22:13.976670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.521 [2024-07-13 20:22:13.976698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.521 qpair failed and we were unable to recover it. 00:34:26.521 [2024-07-13 20:22:13.986518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.521 [2024-07-13 20:22:13.986658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.521 [2024-07-13 20:22:13.986684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.521 [2024-07-13 20:22:13.986698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.521 [2024-07-13 20:22:13.986711] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.521 [2024-07-13 20:22:13.986739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.521 qpair failed and we were unable to recover it. 00:34:26.521 [2024-07-13 20:22:13.996551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.521 [2024-07-13 20:22:13.996755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.521 [2024-07-13 20:22:13.996783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.522 [2024-07-13 20:22:13.996798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.522 [2024-07-13 20:22:13.996815] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.522 [2024-07-13 20:22:13.996845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.522 qpair failed and we were unable to recover it. 
00:34:26.522 [2024-07-13 20:22:14.006572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.522 [2024-07-13 20:22:14.006720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.522 [2024-07-13 20:22:14.006746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.522 [2024-07-13 20:22:14.006760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.522 [2024-07-13 20:22:14.006773] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.522 [2024-07-13 20:22:14.006801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.522 qpair failed and we were unable to recover it. 00:34:26.522 [2024-07-13 20:22:14.016625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.522 [2024-07-13 20:22:14.016815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.522 [2024-07-13 20:22:14.016842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.522 [2024-07-13 20:22:14.016861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.522 [2024-07-13 20:22:14.016886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.522 [2024-07-13 20:22:14.016917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.522 qpair failed and we were unable to recover it. 00:34:26.522 [2024-07-13 20:22:14.026618] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.522 [2024-07-13 20:22:14.026761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.522 [2024-07-13 20:22:14.026787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.522 [2024-07-13 20:22:14.026802] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.522 [2024-07-13 20:22:14.026815] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x165d570 00:34:26.522 [2024-07-13 20:22:14.026844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:26.522 qpair failed and we were unable to recover it. 
00:34:26.522 [2024-07-13 20:22:14.036663] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.522 [2024-07-13 20:22:14.036813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.522 [2024-07-13 20:22:14.036845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.522 [2024-07-13 20:22:14.036886] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.522 [2024-07-13 20:22:14.036912] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f492c000b90 00:34:26.522 [2024-07-13 20:22:14.036961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.522 qpair failed and we were unable to recover it. 00:34:26.522 [2024-07-13 20:22:14.046706] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.522 [2024-07-13 20:22:14.046881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.522 [2024-07-13 20:22:14.046910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.522 [2024-07-13 20:22:14.046937] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.522 [2024-07-13 20:22:14.046962] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f492c000b90 00:34:26.522 [2024-07-13 20:22:14.047010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.522 qpair failed and we were unable to recover it. 00:34:26.522 [2024-07-13 20:22:14.056753] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.522 [2024-07-13 20:22:14.056909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.522 [2024-07-13 20:22:14.056937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.522 [2024-07-13 20:22:14.056960] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.522 [2024-07-13 20:22:14.056984] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f492c000b90 00:34:26.522 [2024-07-13 20:22:14.057031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.522 qpair failed and we were unable to recover it. 
00:34:26.522 [2024-07-13 20:22:14.066770] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.522 [2024-07-13 20:22:14.066922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.522 [2024-07-13 20:22:14.066950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.522 [2024-07-13 20:22:14.066980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.522 [2024-07-13 20:22:14.067006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f492c000b90 00:34:26.522 [2024-07-13 20:22:14.067070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.522 qpair failed and we were unable to recover it. 00:34:26.522 [2024-07-13 20:22:14.076776] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.522 [2024-07-13 20:22:14.076920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.522 [2024-07-13 20:22:14.076949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.522 [2024-07-13 20:22:14.076973] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.522 [2024-07-13 20:22:14.076997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f492c000b90 00:34:26.522 [2024-07-13 20:22:14.077047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.522 qpair failed and we were unable to recover it. 00:34:26.522 [2024-07-13 20:22:14.086816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.522 [2024-07-13 20:22:14.086965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.522 [2024-07-13 20:22:14.086992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.522 [2024-07-13 20:22:14.087016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.522 [2024-07-13 20:22:14.087038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f492c000b90 00:34:26.522 [2024-07-13 20:22:14.087084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.522 qpair failed and we were unable to recover it. 
00:34:26.522 [2024-07-13 20:22:14.096910] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.522 [2024-07-13 20:22:14.097055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.522 [2024-07-13 20:22:14.097082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.522 [2024-07-13 20:22:14.097106] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.522 [2024-07-13 20:22:14.097130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f492c000b90 00:34:26.522 [2024-07-13 20:22:14.097195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.522 qpair failed and we were unable to recover it. 00:34:26.522 [2024-07-13 20:22:14.106882] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.522 [2024-07-13 20:22:14.107033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.522 [2024-07-13 20:22:14.107060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.522 [2024-07-13 20:22:14.107082] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.522 [2024-07-13 20:22:14.107107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f492c000b90 00:34:26.522 [2024-07-13 20:22:14.107157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.522 qpair failed and we were unable to recover it. 00:34:26.522 [2024-07-13 20:22:14.116896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.522 [2024-07-13 20:22:14.117067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.522 [2024-07-13 20:22:14.117095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.522 [2024-07-13 20:22:14.117118] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.522 [2024-07-13 20:22:14.117144] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f492c000b90 00:34:26.522 [2024-07-13 20:22:14.117205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.522 qpair failed and we were unable to recover it. 
00:34:26.522 [2024-07-13 20:22:14.126903] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.522 [2024-07-13 20:22:14.127058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.522 [2024-07-13 20:22:14.127091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.522 [2024-07-13 20:22:14.127115] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.522 [2024-07-13 20:22:14.127137] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f492c000b90 00:34:26.522 [2024-07-13 20:22:14.127181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.522 qpair failed and we were unable to recover it. 00:34:26.522 [2024-07-13 20:22:14.136968] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.522 [2024-07-13 20:22:14.137140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.522 [2024-07-13 20:22:14.137169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.522 [2024-07-13 20:22:14.137193] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.522 [2024-07-13 20:22:14.137217] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f492c000b90 00:34:26.522 [2024-07-13 20:22:14.137278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.523 qpair failed and we were unable to recover it. 00:34:26.523 [2024-07-13 20:22:14.146953] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.523 [2024-07-13 20:22:14.147099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.523 [2024-07-13 20:22:14.147127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.523 [2024-07-13 20:22:14.147152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.523 [2024-07-13 20:22:14.147176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f492c000b90 00:34:26.523 [2024-07-13 20:22:14.147223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.523 qpair failed and we were unable to recover it. 
00:34:26.523 [2024-07-13 20:22:14.157012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.523 [2024-07-13 20:22:14.157159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.523 [2024-07-13 20:22:14.157186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.523 [2024-07-13 20:22:14.157216] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.523 [2024-07-13 20:22:14.157242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f492c000b90 00:34:26.523 [2024-07-13 20:22:14.157288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.523 qpair failed and we were unable to recover it. 00:34:26.523 [2024-07-13 20:22:14.167038] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.523 [2024-07-13 20:22:14.167198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.523 [2024-07-13 20:22:14.167226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.523 [2024-07-13 20:22:14.167250] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.523 [2024-07-13 20:22:14.167287] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f492c000b90 00:34:26.523 [2024-07-13 20:22:14.167346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.523 qpair failed and we were unable to recover it. 00:34:26.784 [2024-07-13 20:22:14.177080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.784 [2024-07-13 20:22:14.177225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.784 [2024-07-13 20:22:14.177253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.784 [2024-07-13 20:22:14.177277] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.784 [2024-07-13 20:22:14.177301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f492c000b90 00:34:26.784 [2024-07-13 20:22:14.177348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:26.784 qpair failed and we were unable to recover it. 
00:34:26.784 [2024-07-13 20:22:14.187147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.784 [2024-07-13 20:22:14.187331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.784 [2024-07-13 20:22:14.187365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.784 [2024-07-13 20:22:14.187380] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.784 [2024-07-13 20:22:14.187394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.784 [2024-07-13 20:22:14.187425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.784 qpair failed and we were unable to recover it. 00:34:26.784 [2024-07-13 20:22:14.197149] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.784 [2024-07-13 20:22:14.197328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.784 [2024-07-13 20:22:14.197357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.784 [2024-07-13 20:22:14.197372] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.784 [2024-07-13 20:22:14.197386] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.784 [2024-07-13 20:22:14.197430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.784 qpair failed and we were unable to recover it. 00:34:26.784 [2024-07-13 20:22:14.207165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.784 [2024-07-13 20:22:14.207318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.784 [2024-07-13 20:22:14.207345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.784 [2024-07-13 20:22:14.207360] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.784 [2024-07-13 20:22:14.207374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.784 [2024-07-13 20:22:14.207404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.784 qpair failed and we were unable to recover it. 
00:34:26.784 [2024-07-13 20:22:14.217188] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.784 [2024-07-13 20:22:14.217336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.784 [2024-07-13 20:22:14.217362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.784 [2024-07-13 20:22:14.217377] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.784 [2024-07-13 20:22:14.217390] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.784 [2024-07-13 20:22:14.217421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.784 qpair failed and we were unable to recover it. 00:34:26.784 [2024-07-13 20:22:14.227241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.784 [2024-07-13 20:22:14.227379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.784 [2024-07-13 20:22:14.227406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.784 [2024-07-13 20:22:14.227421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.784 [2024-07-13 20:22:14.227434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.784 [2024-07-13 20:22:14.227465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.784 qpair failed and we were unable to recover it. 00:34:26.784 [2024-07-13 20:22:14.237346] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.784 [2024-07-13 20:22:14.237504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.784 [2024-07-13 20:22:14.237531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.784 [2024-07-13 20:22:14.237546] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.784 [2024-07-13 20:22:14.237559] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.784 [2024-07-13 20:22:14.237591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.784 qpair failed and we were unable to recover it. 
00:34:26.784 [2024-07-13 20:22:14.247301] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.784 [2024-07-13 20:22:14.247464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.784 [2024-07-13 20:22:14.247497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.784 [2024-07-13 20:22:14.247513] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.784 [2024-07-13 20:22:14.247526] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.784 [2024-07-13 20:22:14.247556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.784 qpair failed and we were unable to recover it. 00:34:26.784 [2024-07-13 20:22:14.257283] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.784 [2024-07-13 20:22:14.257422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.784 [2024-07-13 20:22:14.257448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.784 [2024-07-13 20:22:14.257462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.784 [2024-07-13 20:22:14.257476] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.784 [2024-07-13 20:22:14.257506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.784 qpair failed and we were unable to recover it. 00:34:26.784 [2024-07-13 20:22:14.267335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.785 [2024-07-13 20:22:14.267521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.785 [2024-07-13 20:22:14.267548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.785 [2024-07-13 20:22:14.267562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.785 [2024-07-13 20:22:14.267576] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.785 [2024-07-13 20:22:14.267606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.785 qpair failed and we were unable to recover it. 
00:34:26.785 [2024-07-13 20:22:14.277390] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.785 [2024-07-13 20:22:14.277567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.785 [2024-07-13 20:22:14.277594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.785 [2024-07-13 20:22:14.277609] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.785 [2024-07-13 20:22:14.277623] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.785 [2024-07-13 20:22:14.277664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.785 qpair failed and we were unable to recover it. 00:34:26.785 [2024-07-13 20:22:14.287349] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.785 [2024-07-13 20:22:14.287493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.785 [2024-07-13 20:22:14.287518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.785 [2024-07-13 20:22:14.287533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.785 [2024-07-13 20:22:14.287547] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.785 [2024-07-13 20:22:14.287584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.785 qpair failed and we were unable to recover it. 00:34:26.785 [2024-07-13 20:22:14.297385] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.785 [2024-07-13 20:22:14.297526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.785 [2024-07-13 20:22:14.297552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.785 [2024-07-13 20:22:14.297567] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.785 [2024-07-13 20:22:14.297581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.785 [2024-07-13 20:22:14.297611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.785 qpair failed and we were unable to recover it. 
00:34:26.785 [2024-07-13 20:22:14.307446] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.785 [2024-07-13 20:22:14.307590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.785 [2024-07-13 20:22:14.307617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.785 [2024-07-13 20:22:14.307632] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.785 [2024-07-13 20:22:14.307647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.785 [2024-07-13 20:22:14.307677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.785 qpair failed and we were unable to recover it. 00:34:26.785 [2024-07-13 20:22:14.317472] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.785 [2024-07-13 20:22:14.317611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.785 [2024-07-13 20:22:14.317637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.785 [2024-07-13 20:22:14.317652] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.785 [2024-07-13 20:22:14.317665] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.785 [2024-07-13 20:22:14.317696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.785 qpair failed and we were unable to recover it. 00:34:26.785 [2024-07-13 20:22:14.327560] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.785 [2024-07-13 20:22:14.327703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.785 [2024-07-13 20:22:14.327730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.785 [2024-07-13 20:22:14.327744] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.785 [2024-07-13 20:22:14.327758] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.785 [2024-07-13 20:22:14.327788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.785 qpair failed and we were unable to recover it. 
00:34:26.785 [2024-07-13 20:22:14.337486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.785 [2024-07-13 20:22:14.337629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.785 [2024-07-13 20:22:14.337660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.785 [2024-07-13 20:22:14.337675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.785 [2024-07-13 20:22:14.337688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.785 [2024-07-13 20:22:14.337720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.785 qpair failed and we were unable to recover it. 00:34:26.785 [2024-07-13 20:22:14.347514] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.785 [2024-07-13 20:22:14.347671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.785 [2024-07-13 20:22:14.347697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.785 [2024-07-13 20:22:14.347712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.785 [2024-07-13 20:22:14.347726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.785 [2024-07-13 20:22:14.347757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.785 qpair failed and we were unable to recover it. 00:34:26.785 [2024-07-13 20:22:14.357562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.785 [2024-07-13 20:22:14.357709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.785 [2024-07-13 20:22:14.357735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.785 [2024-07-13 20:22:14.357750] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.785 [2024-07-13 20:22:14.357763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.785 [2024-07-13 20:22:14.357793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.785 qpair failed and we were unable to recover it. 
00:34:26.785 [2024-07-13 20:22:14.367614] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.785 [2024-07-13 20:22:14.367794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.785 [2024-07-13 20:22:14.367821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.785 [2024-07-13 20:22:14.367836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.785 [2024-07-13 20:22:14.367849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.785 [2024-07-13 20:22:14.367891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.785 qpair failed and we were unable to recover it. 00:34:26.785 [2024-07-13 20:22:14.377651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.785 [2024-07-13 20:22:14.377852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.785 [2024-07-13 20:22:14.377890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.785 [2024-07-13 20:22:14.377907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.785 [2024-07-13 20:22:14.377926] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.785 [2024-07-13 20:22:14.377961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.785 qpair failed and we were unable to recover it. 00:34:26.785 [2024-07-13 20:22:14.387655] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.785 [2024-07-13 20:22:14.387803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.785 [2024-07-13 20:22:14.387829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.785 [2024-07-13 20:22:14.387844] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.785 [2024-07-13 20:22:14.387856] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.785 [2024-07-13 20:22:14.387898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.785 qpair failed and we were unable to recover it. 
00:34:26.785 [2024-07-13 20:22:14.397674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.785 [2024-07-13 20:22:14.397815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.785 [2024-07-13 20:22:14.397842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.785 [2024-07-13 20:22:14.397857] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.785 [2024-07-13 20:22:14.397880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.785 [2024-07-13 20:22:14.397912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.785 qpair failed and we were unable to recover it. 00:34:26.785 [2024-07-13 20:22:14.407745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.785 [2024-07-13 20:22:14.407924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.786 [2024-07-13 20:22:14.407951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.786 [2024-07-13 20:22:14.407966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.786 [2024-07-13 20:22:14.407980] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.786 [2024-07-13 20:22:14.408010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.786 qpair failed and we were unable to recover it. 00:34:26.786 [2024-07-13 20:22:14.417763] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.786 [2024-07-13 20:22:14.417951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.786 [2024-07-13 20:22:14.417977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.786 [2024-07-13 20:22:14.417992] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.786 [2024-07-13 20:22:14.418008] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.786 [2024-07-13 20:22:14.418039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.786 qpair failed and we were unable to recover it. 
00:34:26.786 [2024-07-13 20:22:14.427876] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.786 [2024-07-13 20:22:14.428057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.786 [2024-07-13 20:22:14.428083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.786 [2024-07-13 20:22:14.428098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.786 [2024-07-13 20:22:14.428111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.786 [2024-07-13 20:22:14.428141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.786 qpair failed and we were unable to recover it. 00:34:26.786 [2024-07-13 20:22:14.437796] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:26.786 [2024-07-13 20:22:14.437939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:26.786 [2024-07-13 20:22:14.437966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:26.786 [2024-07-13 20:22:14.437981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:26.786 [2024-07-13 20:22:14.437995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:26.786 [2024-07-13 20:22:14.438038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:26.786 qpair failed and we were unable to recover it. 00:34:27.045 [2024-07-13 20:22:14.447839] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.045 [2024-07-13 20:22:14.447994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.045 [2024-07-13 20:22:14.448020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.045 [2024-07-13 20:22:14.448035] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.045 [2024-07-13 20:22:14.448049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.045 [2024-07-13 20:22:14.448081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.045 qpair failed and we were unable to recover it. 
00:34:27.045 [2024-07-13 20:22:14.457870] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.045 [2024-07-13 20:22:14.458011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.045 [2024-07-13 20:22:14.458037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.045 [2024-07-13 20:22:14.458053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.045 [2024-07-13 20:22:14.458066] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.045 [2024-07-13 20:22:14.458096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.045 qpair failed and we were unable to recover it. 00:34:27.045 [2024-07-13 20:22:14.467863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.045 [2024-07-13 20:22:14.468068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.045 [2024-07-13 20:22:14.468093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.045 [2024-07-13 20:22:14.468113] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.045 [2024-07-13 20:22:14.468128] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.045 [2024-07-13 20:22:14.468159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.045 qpair failed and we were unable to recover it. 00:34:27.045 [2024-07-13 20:22:14.477947] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.045 [2024-07-13 20:22:14.478141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.045 [2024-07-13 20:22:14.478169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.045 [2024-07-13 20:22:14.478184] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.045 [2024-07-13 20:22:14.478198] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.045 [2024-07-13 20:22:14.478228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.045 qpair failed and we were unable to recover it. 
00:34:27.045 [2024-07-13 20:22:14.487964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.045 [2024-07-13 20:22:14.488108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.046 [2024-07-13 20:22:14.488134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.046 [2024-07-13 20:22:14.488149] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.046 [2024-07-13 20:22:14.488162] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.046 [2024-07-13 20:22:14.488194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.046 qpair failed and we were unable to recover it. 00:34:27.046 [2024-07-13 20:22:14.498006] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.046 [2024-07-13 20:22:14.498179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.046 [2024-07-13 20:22:14.498207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.046 [2024-07-13 20:22:14.498223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.046 [2024-07-13 20:22:14.498240] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.046 [2024-07-13 20:22:14.498284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.046 qpair failed and we were unable to recover it. 00:34:27.046 [2024-07-13 20:22:14.508017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.046 [2024-07-13 20:22:14.508193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.046 [2024-07-13 20:22:14.508220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.046 [2024-07-13 20:22:14.508235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.046 [2024-07-13 20:22:14.508249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.046 [2024-07-13 20:22:14.508279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.046 qpair failed and we were unable to recover it. 
00:34:27.046 [2024-07-13 20:22:14.518029] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.046 [2024-07-13 20:22:14.518218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.046 [2024-07-13 20:22:14.518244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.046 [2024-07-13 20:22:14.518259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.046 [2024-07-13 20:22:14.518272] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.046 [2024-07-13 20:22:14.518302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.046 qpair failed and we were unable to recover it. 00:34:27.046 [2024-07-13 20:22:14.528147] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.046 [2024-07-13 20:22:14.528292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.046 [2024-07-13 20:22:14.528317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.046 [2024-07-13 20:22:14.528332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.046 [2024-07-13 20:22:14.528346] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.046 [2024-07-13 20:22:14.528377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.046 qpair failed and we were unable to recover it. 00:34:27.046 [2024-07-13 20:22:14.538106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.046 [2024-07-13 20:22:14.538249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.046 [2024-07-13 20:22:14.538274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.046 [2024-07-13 20:22:14.538289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.046 [2024-07-13 20:22:14.538302] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.046 [2024-07-13 20:22:14.538334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.046 qpair failed and we were unable to recover it. 
00:34:27.046 [2024-07-13 20:22:14.548173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.046 [2024-07-13 20:22:14.548320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.046 [2024-07-13 20:22:14.548346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.046 [2024-07-13 20:22:14.548361] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.046 [2024-07-13 20:22:14.548374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.046 [2024-07-13 20:22:14.548404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.046 qpair failed and we were unable to recover it. 00:34:27.046 [2024-07-13 20:22:14.558174] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.046 [2024-07-13 20:22:14.558336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.046 [2024-07-13 20:22:14.558362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.046 [2024-07-13 20:22:14.558383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.046 [2024-07-13 20:22:14.558398] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.046 [2024-07-13 20:22:14.558429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.046 qpair failed and we were unable to recover it. 00:34:27.046 [2024-07-13 20:22:14.568235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.046 [2024-07-13 20:22:14.568379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.046 [2024-07-13 20:22:14.568406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.046 [2024-07-13 20:22:14.568421] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.046 [2024-07-13 20:22:14.568434] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.046 [2024-07-13 20:22:14.568478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.046 qpair failed and we were unable to recover it. 
00:34:27.046 [2024-07-13 20:22:14.578222] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.046 [2024-07-13 20:22:14.578359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.046 [2024-07-13 20:22:14.578385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.046 [2024-07-13 20:22:14.578399] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.046 [2024-07-13 20:22:14.578413] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.046 [2024-07-13 20:22:14.578444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.046 qpair failed and we were unable to recover it. 00:34:27.046 [2024-07-13 20:22:14.588228] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.046 [2024-07-13 20:22:14.588369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.046 [2024-07-13 20:22:14.588395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.046 [2024-07-13 20:22:14.588409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.046 [2024-07-13 20:22:14.588422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.046 [2024-07-13 20:22:14.588453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.046 qpair failed and we were unable to recover it. 00:34:27.046 [2024-07-13 20:22:14.598293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.046 [2024-07-13 20:22:14.598430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.046 [2024-07-13 20:22:14.598456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.046 [2024-07-13 20:22:14.598470] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.046 [2024-07-13 20:22:14.598483] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.046 [2024-07-13 20:22:14.598514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.046 qpair failed and we were unable to recover it. 
00:34:27.046 [2024-07-13 20:22:14.608288] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.046 [2024-07-13 20:22:14.608430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.046 [2024-07-13 20:22:14.608455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.046 [2024-07-13 20:22:14.608469] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.046 [2024-07-13 20:22:14.608483] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.046 [2024-07-13 20:22:14.608512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.046 qpair failed and we were unable to recover it. 00:34:27.046 [2024-07-13 20:22:14.618341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.046 [2024-07-13 20:22:14.618485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.046 [2024-07-13 20:22:14.618511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.046 [2024-07-13 20:22:14.618526] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.046 [2024-07-13 20:22:14.618539] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.046 [2024-07-13 20:22:14.618571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.046 qpair failed and we were unable to recover it. 00:34:27.046 [2024-07-13 20:22:14.628379] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.046 [2024-07-13 20:22:14.628528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.046 [2024-07-13 20:22:14.628555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.046 [2024-07-13 20:22:14.628573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.047 [2024-07-13 20:22:14.628586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.047 [2024-07-13 20:22:14.628616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.047 qpair failed and we were unable to recover it. 
00:34:27.047 [2024-07-13 20:22:14.638405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.047 [2024-07-13 20:22:14.638585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.047 [2024-07-13 20:22:14.638611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.047 [2024-07-13 20:22:14.638626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.047 [2024-07-13 20:22:14.638638] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.047 [2024-07-13 20:22:14.638668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.047 qpair failed and we were unable to recover it. 00:34:27.047 [2024-07-13 20:22:14.648435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.047 [2024-07-13 20:22:14.648577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.047 [2024-07-13 20:22:14.648608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.047 [2024-07-13 20:22:14.648624] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.047 [2024-07-13 20:22:14.648637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.047 [2024-07-13 20:22:14.648667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.047 qpair failed and we were unable to recover it. 00:34:27.047 [2024-07-13 20:22:14.658476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.047 [2024-07-13 20:22:14.658615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.047 [2024-07-13 20:22:14.658641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.047 [2024-07-13 20:22:14.658655] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.047 [2024-07-13 20:22:14.658669] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.047 [2024-07-13 20:22:14.658699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.047 qpair failed and we were unable to recover it. 
00:34:27.047 [2024-07-13 20:22:14.668489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.047 [2024-07-13 20:22:14.668644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.047 [2024-07-13 20:22:14.668670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.047 [2024-07-13 20:22:14.668684] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.047 [2024-07-13 20:22:14.668698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.047 [2024-07-13 20:22:14.668730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.047 qpair failed and we were unable to recover it. 00:34:27.047 [2024-07-13 20:22:14.678556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.047 [2024-07-13 20:22:14.678713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.047 [2024-07-13 20:22:14.678743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.047 [2024-07-13 20:22:14.678758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.047 [2024-07-13 20:22:14.678771] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.047 [2024-07-13 20:22:14.678803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.047 qpair failed and we were unable to recover it. 00:34:27.047 [2024-07-13 20:22:14.688556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.047 [2024-07-13 20:22:14.688704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.047 [2024-07-13 20:22:14.688730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.047 [2024-07-13 20:22:14.688745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.047 [2024-07-13 20:22:14.688759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.047 [2024-07-13 20:22:14.688795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.047 qpair failed and we were unable to recover it. 
00:34:27.047 [2024-07-13 20:22:14.698567] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.047 [2024-07-13 20:22:14.698707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.047 [2024-07-13 20:22:14.698734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.047 [2024-07-13 20:22:14.698749] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.047 [2024-07-13 20:22:14.698762] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.047 [2024-07-13 20:22:14.698792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.047 qpair failed and we were unable to recover it. 00:34:27.305 [2024-07-13 20:22:14.708589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.305 [2024-07-13 20:22:14.708724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.305 [2024-07-13 20:22:14.708750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.305 [2024-07-13 20:22:14.708764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.305 [2024-07-13 20:22:14.708777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.305 [2024-07-13 20:22:14.708809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.305 qpair failed and we were unable to recover it. 00:34:27.305 [2024-07-13 20:22:14.718669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.305 [2024-07-13 20:22:14.718834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.305 [2024-07-13 20:22:14.718860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.305 [2024-07-13 20:22:14.718884] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.305 [2024-07-13 20:22:14.718897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.305 [2024-07-13 20:22:14.718929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.305 qpair failed and we were unable to recover it. 
00:34:27.305 [2024-07-13 20:22:14.728662] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.305 [2024-07-13 20:22:14.728814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.305 [2024-07-13 20:22:14.728840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.305 [2024-07-13 20:22:14.728855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.305 [2024-07-13 20:22:14.728877] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.305 [2024-07-13 20:22:14.728912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.305 qpair failed and we were unable to recover it. 00:34:27.305 [2024-07-13 20:22:14.738759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.305 [2024-07-13 20:22:14.738927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.305 [2024-07-13 20:22:14.738959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.305 [2024-07-13 20:22:14.738974] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.305 [2024-07-13 20:22:14.738987] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.305 [2024-07-13 20:22:14.739017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.305 qpair failed and we were unable to recover it. 00:34:27.305 [2024-07-13 20:22:14.748755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.305 [2024-07-13 20:22:14.748906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.306 [2024-07-13 20:22:14.748934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.306 [2024-07-13 20:22:14.748954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.306 [2024-07-13 20:22:14.748968] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.306 [2024-07-13 20:22:14.749000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.306 qpair failed and we were unable to recover it. 
00:34:27.306 [2024-07-13 20:22:14.758796] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.306 [2024-07-13 20:22:14.758939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.306 [2024-07-13 20:22:14.758966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.306 [2024-07-13 20:22:14.758980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.306 [2024-07-13 20:22:14.758994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.306 [2024-07-13 20:22:14.759024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.306 qpair failed and we were unable to recover it. 00:34:27.306 [2024-07-13 20:22:14.768797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.306 [2024-07-13 20:22:14.768975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.306 [2024-07-13 20:22:14.769001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.306 [2024-07-13 20:22:14.769016] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.306 [2024-07-13 20:22:14.769028] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.306 [2024-07-13 20:22:14.769059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.306 qpair failed and we were unable to recover it. 00:34:27.306 [2024-07-13 20:22:14.778827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.306 [2024-07-13 20:22:14.778970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.306 [2024-07-13 20:22:14.778997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.306 [2024-07-13 20:22:14.779011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.306 [2024-07-13 20:22:14.779033] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.306 [2024-07-13 20:22:14.779064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.306 qpair failed and we were unable to recover it. 
00:34:27.306 [2024-07-13 20:22:14.788839] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.306 [2024-07-13 20:22:14.788982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.306 [2024-07-13 20:22:14.789008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.306 [2024-07-13 20:22:14.789022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.306 [2024-07-13 20:22:14.789036] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.306 [2024-07-13 20:22:14.789066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.306 qpair failed and we were unable to recover it. 00:34:27.306 [2024-07-13 20:22:14.798938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.306 [2024-07-13 20:22:14.799120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.306 [2024-07-13 20:22:14.799146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.306 [2024-07-13 20:22:14.799163] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.306 [2024-07-13 20:22:14.799176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.306 [2024-07-13 20:22:14.799207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.306 qpair failed and we were unable to recover it. 00:34:27.306 [2024-07-13 20:22:14.808917] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.306 [2024-07-13 20:22:14.809062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.306 [2024-07-13 20:22:14.809088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.306 [2024-07-13 20:22:14.809102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.306 [2024-07-13 20:22:14.809116] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.306 [2024-07-13 20:22:14.809146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.306 qpair failed and we were unable to recover it. 
00:34:27.306 [2024-07-13 20:22:14.818964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.306 [2024-07-13 20:22:14.819111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.306 [2024-07-13 20:22:14.819136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.306 [2024-07-13 20:22:14.819151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.306 [2024-07-13 20:22:14.819164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.306 [2024-07-13 20:22:14.819194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.306 qpair failed and we were unable to recover it. 00:34:27.306 [2024-07-13 20:22:14.829008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.306 [2024-07-13 20:22:14.829194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.306 [2024-07-13 20:22:14.829220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.306 [2024-07-13 20:22:14.829235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.306 [2024-07-13 20:22:14.829249] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.306 [2024-07-13 20:22:14.829292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.306 qpair failed and we were unable to recover it. 00:34:27.306 [2024-07-13 20:22:14.839025] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.306 [2024-07-13 20:22:14.839162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.306 [2024-07-13 20:22:14.839187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.306 [2024-07-13 20:22:14.839202] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.306 [2024-07-13 20:22:14.839215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.306 [2024-07-13 20:22:14.839245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.306 qpair failed and we were unable to recover it. 
00:34:27.306 [2024-07-13 20:22:14.849043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.306 [2024-07-13 20:22:14.849181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.306 [2024-07-13 20:22:14.849206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.306 [2024-07-13 20:22:14.849221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.306 [2024-07-13 20:22:14.849234] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.306 [2024-07-13 20:22:14.849264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.306 qpair failed and we were unable to recover it. 00:34:27.306 [2024-07-13 20:22:14.859068] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.306 [2024-07-13 20:22:14.859212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.306 [2024-07-13 20:22:14.859238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.306 [2024-07-13 20:22:14.859253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.306 [2024-07-13 20:22:14.859266] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.306 [2024-07-13 20:22:14.859297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.306 qpair failed and we were unable to recover it. 00:34:27.306 [2024-07-13 20:22:14.869098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.306 [2024-07-13 20:22:14.869260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.306 [2024-07-13 20:22:14.869286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.306 [2024-07-13 20:22:14.869301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.306 [2024-07-13 20:22:14.869320] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.306 [2024-07-13 20:22:14.869353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.306 qpair failed and we were unable to recover it. 
00:34:27.306 [2024-07-13 20:22:14.879160] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.306 [2024-07-13 20:22:14.879301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.306 [2024-07-13 20:22:14.879327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.306 [2024-07-13 20:22:14.879341] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.306 [2024-07-13 20:22:14.879355] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.306 [2024-07-13 20:22:14.879386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.306 qpair failed and we were unable to recover it. 00:34:27.306 [2024-07-13 20:22:14.889158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.306 [2024-07-13 20:22:14.889340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.306 [2024-07-13 20:22:14.889366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.306 [2024-07-13 20:22:14.889381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.306 [2024-07-13 20:22:14.889394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.307 [2024-07-13 20:22:14.889424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.307 qpair failed and we were unable to recover it. 00:34:27.307 [2024-07-13 20:22:14.899203] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.307 [2024-07-13 20:22:14.899341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.307 [2024-07-13 20:22:14.899367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.307 [2024-07-13 20:22:14.899381] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.307 [2024-07-13 20:22:14.899395] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.307 [2024-07-13 20:22:14.899425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.307 qpair failed and we were unable to recover it. 
00:34:27.307 [2024-07-13 20:22:14.909184] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.307 [2024-07-13 20:22:14.909325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.307 [2024-07-13 20:22:14.909351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.307 [2024-07-13 20:22:14.909365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.307 [2024-07-13 20:22:14.909378] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.307 [2024-07-13 20:22:14.909409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.307 qpair failed and we were unable to recover it. 00:34:27.307 [2024-07-13 20:22:14.919251] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.307 [2024-07-13 20:22:14.919389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.307 [2024-07-13 20:22:14.919415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.307 [2024-07-13 20:22:14.919429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.307 [2024-07-13 20:22:14.919443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.307 [2024-07-13 20:22:14.919484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.307 qpair failed and we were unable to recover it. 00:34:27.307 [2024-07-13 20:22:14.929291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.307 [2024-07-13 20:22:14.929438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.307 [2024-07-13 20:22:14.929464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.307 [2024-07-13 20:22:14.929479] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.307 [2024-07-13 20:22:14.929493] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.307 [2024-07-13 20:22:14.929534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.307 qpair failed and we were unable to recover it. 
00:34:27.307 [2024-07-13 20:22:14.939323] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.307 [2024-07-13 20:22:14.939468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.307 [2024-07-13 20:22:14.939495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.307 [2024-07-13 20:22:14.939510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.307 [2024-07-13 20:22:14.939523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.307 [2024-07-13 20:22:14.939554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.307 qpair failed and we were unable to recover it. 00:34:27.307 [2024-07-13 20:22:14.949304] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.307 [2024-07-13 20:22:14.949441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.307 [2024-07-13 20:22:14.949467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.307 [2024-07-13 20:22:14.949482] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.307 [2024-07-13 20:22:14.949496] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.307 [2024-07-13 20:22:14.949526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.307 qpair failed and we were unable to recover it. 00:34:27.307 [2024-07-13 20:22:14.959377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.307 [2024-07-13 20:22:14.959516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.307 [2024-07-13 20:22:14.959542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.307 [2024-07-13 20:22:14.959563] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.307 [2024-07-13 20:22:14.959577] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.307 [2024-07-13 20:22:14.959620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.307 qpair failed and we were unable to recover it. 
00:34:27.565 [2024-07-13 20:22:14.969375] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.565 [2024-07-13 20:22:14.969516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.565 [2024-07-13 20:22:14.969543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.565 [2024-07-13 20:22:14.969558] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.565 [2024-07-13 20:22:14.969571] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.565 [2024-07-13 20:22:14.969601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.565 qpair failed and we were unable to recover it. 00:34:27.565 [2024-07-13 20:22:14.979388] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.565 [2024-07-13 20:22:14.979529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.565 [2024-07-13 20:22:14.979554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.565 [2024-07-13 20:22:14.979569] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.565 [2024-07-13 20:22:14.979583] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.565 [2024-07-13 20:22:14.979614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.565 qpair failed and we were unable to recover it. 00:34:27.565 [2024-07-13 20:22:14.989474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.565 [2024-07-13 20:22:14.989624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.565 [2024-07-13 20:22:14.989649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.565 [2024-07-13 20:22:14.989664] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.565 [2024-07-13 20:22:14.989678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.565 [2024-07-13 20:22:14.989708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.565 qpair failed and we were unable to recover it. 
00:34:27.565 [2024-07-13 20:22:14.999469] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.565 [2024-07-13 20:22:14.999656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.565 [2024-07-13 20:22:14.999682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.565 [2024-07-13 20:22:14.999697] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.565 [2024-07-13 20:22:14.999710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.565 [2024-07-13 20:22:14.999740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.565 qpair failed and we were unable to recover it. 00:34:27.565 [2024-07-13 20:22:15.009482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.565 [2024-07-13 20:22:15.009644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.565 [2024-07-13 20:22:15.009670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.565 [2024-07-13 20:22:15.009685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.565 [2024-07-13 20:22:15.009698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.565 [2024-07-13 20:22:15.009728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.565 qpair failed and we were unable to recover it. 00:34:27.565 [2024-07-13 20:22:15.019543] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:27.565 [2024-07-13 20:22:15.019723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:27.565 [2024-07-13 20:22:15.019749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:27.565 [2024-07-13 20:22:15.019764] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:27.565 [2024-07-13 20:22:15.019778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90 00:34:27.565 [2024-07-13 20:22:15.019808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:27.565 qpair failed and we were unable to recover it. 
00:34:27.565 [2024-07-13 20:22:15.029546] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.565 [2024-07-13 20:22:15.029693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.565 [2024-07-13 20:22:15.029720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.566 [2024-07-13 20:22:15.029735] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.566 [2024-07-13 20:22:15.029748] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90
00:34:27.566 [2024-07-13 20:22:15.029790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:34:27.566 qpair failed and we were unable to recover it.
00:34:27.566 [2024-07-13 20:22:15.039592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:27.566 [2024-07-13 20:22:15.039739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:27.566 [2024-07-13 20:22:15.039765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:27.566 [2024-07-13 20:22:15.039780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:27.566 [2024-07-13 20:22:15.039794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4924000b90
00:34:27.566 [2024-07-13 20:22:15.039824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:34:27.566 qpair failed and we were unable to recover it.
00:34:27.566 [2024-07-13 20:22:15.039870] nvme_ctrlr.c:4353:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:34:27.566 A controller has encountered a failure and is being reset.
00:34:27.566 Controller properly reset.
00:34:30.118 Initializing NVMe Controllers
00:34:30.118 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:30.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:34:30.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:34:30.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:34:30.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:34:30.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:34:30.118 Initialization complete. Launching workers.
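
Once the keep-alive failure trips the reset path, the host tears the controller down and reconnects cleanly: all four I/O queues reattach to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 and the workers relaunch. Outside the harness, the same listener could be probed by hand with nvme-cli to confirm the target is accepting connections again (a manual sketch, not something target_disconnect.sh runs):

  nvme discover -t tcp -a 10.0.0.2 -s 4420                       # list subsystems on the listener
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1                  # detach again when done
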
00:34:30.118 Starting thread on core 1 00:34:30.118 Starting thread on core 2 00:34:30.118 Starting thread on core 3 00:34:30.118 Starting thread on core 0 00:34:30.118 20:22:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:30.118 00:34:30.118 real 0m10.691s 00:34:30.118 user 0m23.778s 00:34:30.118 sys 0m6.162s 00:34:30.118 20:22:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:30.118 20:22:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:30.118 ************************************ 00:34:30.119 END TEST nvmf_target_disconnect_tc2 00:34:30.119 ************************************ 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:30.119 rmmod nvme_tcp 00:34:30.119 rmmod nvme_fabrics 00:34:30.119 rmmod nvme_keyring 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3353475 ']' 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3353475 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3353475 ']' 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 3353475 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3353475 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3353475' 00:34:30.119 killing process with pid 3353475 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 3353475 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 3353475 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:30.119 
20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:30.119 20:22:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.657 20:22:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:32.657 00:34:32.657 real 0m15.396s 00:34:32.657 user 0m49.370s 00:34:32.657 sys 0m8.308s 00:34:32.657 20:22:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:32.657 20:22:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:32.657 ************************************ 00:34:32.657 END TEST nvmf_target_disconnect 00:34:32.657 ************************************ 00:34:32.657 20:22:19 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:32.657 20:22:19 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:32.657 20:22:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.657 20:22:19 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:32.657 00:34:32.657 real 26m59.516s 00:34:32.657 user 73m51.829s 00:34:32.657 sys 6m30.252s 00:34:32.657 20:22:19 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:32.657 20:22:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.657 ************************************ 00:34:32.657 END TEST nvmf_tcp 00:34:32.657 ************************************ 00:34:32.657 20:22:19 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:32.657 20:22:19 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:32.657 20:22:19 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:32.657 20:22:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:32.657 20:22:19 -- common/autotest_common.sh@10 -- # set +x 00:34:32.657 ************************************ 00:34:32.657 START TEST spdkcli_nvmf_tcp 00:34:32.657 ************************************ 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:32.657 * Looking for test storage... 
00:34:32.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3354555 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3354555 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 3354555 ']' 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:32.657 20:22:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.657 [2024-07-13 20:22:19.935283] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:34:32.658 [2024-07-13 20:22:19.935370] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3354555 ] 00:34:32.658 EAL: No free 2048 kB hugepages reported on node 1 00:34:32.658 [2024-07-13 20:22:20.000442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:32.658 [2024-07-13 20:22:20.097898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:32.658 [2024-07-13 20:22:20.097913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.658 20:22:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:32.658 20:22:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:34:32.658 20:22:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:32.658 20:22:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:32.658 20:22:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.658 20:22:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:32.658 20:22:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:32.658 20:22:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:32.658 20:22:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:32.658 20:22:20 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.658 20:22:20 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:32.658 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:32.658 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:32.658 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:32.658 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:32.658 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:32.658 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:32.658 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:32.658 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:32.658 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:32.658 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:32.658 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:32.658 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:32.658 ' 00:34:35.193 [2024-07-13 20:22:22.803823] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:36.569 [2024-07-13 20:22:24.040165] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:39.093 [2024-07-13 20:22:26.323422] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:40.990 [2024-07-13 20:22:28.281567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:42.364 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:42.364 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:42.364 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:42.364 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:42.364 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:42.364 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:42.364 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:42.364 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:42.364 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:42.364 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:42.364 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:42.364 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:42.364 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:42.364 20:22:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:42.364 20:22:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:42.364 20:22:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.364 20:22:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:42.364 20:22:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:42.364 20:22:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.364 20:22:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:42.364 20:22:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:42.931 20:22:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:42.931 20:22:30 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:42.931 20:22:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:42.931 20:22:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:42.931 20:22:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.931 20:22:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:42.931 20:22:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:42.931 20:22:30 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:42.931 20:22:30 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:42.931 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:42.931 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:42.931 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:42.931 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:42.931 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:42.931 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:42.931 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:42.931 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:42.931 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:42.931 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:42.931 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:42.931 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:42.931 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:42.931 ' 00:34:48.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:48.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:48.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:48.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:48.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:48.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:48.203 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:48.203 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:48.203 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:48.203 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:48.203 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:34:48.203 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:48.203 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:48.203 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:48.203 20:22:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:48.203 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:48.203 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.203 20:22:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3354555 00:34:48.203 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3354555 ']' 00:34:48.203 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3354555 00:34:48.203 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:34:48.203 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:48.203 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3354555 00:34:48.203 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:48.203 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:48.203 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3354555' 00:34:48.203 killing process with pid 3354555 00:34:48.204 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 3354555 00:34:48.204 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 3354555 00:34:48.464 20:22:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:48.464 20:22:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:48.464 20:22:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3354555 ']' 00:34:48.464 20:22:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3354555 00:34:48.464 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 3354555 ']' 00:34:48.464 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 3354555 00:34:48.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3354555) - No such process 00:34:48.464 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 3354555 is not found' 00:34:48.464 Process with pid 3354555 is not found 00:34:48.464 20:22:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:48.464 20:22:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:48.464 20:22:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:48.464 00:34:48.464 real 0m16.047s 00:34:48.464 user 0m33.915s 00:34:48.464 sys 0m0.824s 00:34:48.464 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:48.464 20:22:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:48.464 ************************************ 00:34:48.464 END TEST spdkcli_nvmf_tcp 00:34:48.464 ************************************ 00:34:48.464 20:22:35 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:48.464 20:22:35 -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:48.464 20:22:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:48.464 20:22:35 -- common/autotest_common.sh@10 -- # set +x 00:34:48.464 ************************************ 00:34:48.464 START TEST nvmf_identify_passthru 00:34:48.464 ************************************ 00:34:48.464 20:22:35 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:48.464 * Looking for test storage... 00:34:48.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:48.464 20:22:35 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:48.464 20:22:35 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.464 20:22:35 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.464 20:22:35 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.464 20:22:35 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.464 20:22:35 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.464 20:22:35 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.464 20:22:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:48.464 20:22:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:48.464 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:48.465 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:48.465 20:22:35 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:48.465 20:22:35 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:48.465 20:22:35 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:48.465 20:22:35 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:48.465 20:22:35 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.465 20:22:35 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.465 20:22:35 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.465 20:22:35 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:48.465 20:22:35 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:48.465 20:22:35 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:48.465 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:48.465 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:48.465 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:48.465 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:48.465 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:48.465 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:48.465 20:22:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:48.465 20:22:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:48.465 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:48.465 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:48.465 20:22:35 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:48.465 20:22:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
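
The gather_supported_nvmf_pci_devs trace that follows builds whitelists of NIC PCI IDs (Intel E810 parts 0x1592/0x159b, the X722 ID 0x37d2, and a set of Mellanox IDs) and then walks the PCI bus; on this host it finds the two E810 ports at 0000:0a:00.0/1 with their cvl_0_0/cvl_0_1 net devices. The same inventory can be sanity-checked by hand with lspci's vendor:device filter (a sketch using the IDs from the trace; adjust for other NICs):

  lspci -d 8086:159b    # E810 ports matched into the e810 array (0000:0a:00.0/1 here)
  lspci -d 8086:1592    # the other E810 device ID the script checks
  lspci -d 15b3:        # any Mellanox function; the script filters specific device IDs
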
00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:50.371 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:50.371 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:50.372 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:50.372 20:22:37 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:50.372 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:50.372 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:50.372 20:22:37 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:50.372 20:22:37 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:50.372 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:50.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:50.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:34:50.633 00:34:50.633 --- 10.0.0.2 ping statistics --- 00:34:50.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.633 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:50.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:50.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:34:50.633 00:34:50.633 --- 10.0.0.1 ping statistics --- 00:34:50.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.633 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:50.633 20:22:38 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:50.633 20:22:38 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:50.633 20:22:38 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:50.633 20:22:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:50.633 20:22:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:50.633 20:22:38 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:34:50.633 20:22:38 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:34:50.633 20:22:38 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:34:50.633 20:22:38 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:34:50.633 20:22:38 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:50.633 20:22:38 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:50.633 20:22:38 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:50.633 20:22:38 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:50.633 20:22:38 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:34:50.633 20:22:38 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:34:50.633 20:22:38 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:34:50.633 20:22:38 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:34:50.633 20:22:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:50.633 20:22:38 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:50.633 20:22:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:50.633 20:22:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:50.633 20:22:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:50.633 EAL: No free 2048 kB hugepages reported on node 1 00:34:54.828 
20:22:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:54.828 20:22:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:54.828 20:22:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:54.828 20:22:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:54.828 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.014 20:22:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:59.014 20:22:46 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:59.014 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:59.014 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.014 20:22:46 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:59.014 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:59.014 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.014 20:22:46 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3359161 00:34:59.014 20:22:46 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:59.014 20:22:46 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:59.014 20:22:46 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3359161 00:34:59.014 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 3359161 ']' 00:34:59.014 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.014 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:59.014 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:59.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:59.014 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:59.014 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.014 [2024-07-13 20:22:46.634904] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:34:59.014 [2024-07-13 20:22:46.634995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:59.014 EAL: No free 2048 kB hugepages reported on node 1 00:34:59.274 [2024-07-13 20:22:46.699969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:59.274 [2024-07-13 20:22:46.787237] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:59.274 [2024-07-13 20:22:46.787302] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
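The target is deliberately started with --wait-for-rpc so the passthru option can be set before subsystem initialization. waitforlisten then blocks until the RPC socket answers; the polling loop below is an approximation of that helper, not its literal source:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
# Approximate waitforlisten: poll until /var/tmp/spdk.sock accepts RPCs.
until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done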
00:34:59.274 [2024-07-13 20:22:46.787331] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:59.274 [2024-07-13 20:22:46.787342] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:59.274 [2024-07-13 20:22:46.787352] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:59.274 [2024-07-13 20:22:46.787406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.274 [2024-07-13 20:22:46.787437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:59.274 [2024-07-13 20:22:46.787492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:34:59.274 [2024-07-13 20:22:46.787494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.274 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:59.274 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:34:59.274 20:22:46 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:59.274 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.274 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.274 INFO: Log level set to 20 00:34:59.274 INFO: Requests: 00:34:59.274 { 00:34:59.274 "jsonrpc": "2.0", 00:34:59.274 "method": "nvmf_set_config", 00:34:59.274 "id": 1, 00:34:59.274 "params": { 00:34:59.274 "admin_cmd_passthru": { 00:34:59.274 "identify_ctrlr": true 00:34:59.274 } 00:34:59.274 } 00:34:59.274 } 00:34:59.274 00:34:59.274 INFO: response: 00:34:59.274 { 00:34:59.274 "jsonrpc": "2.0", 00:34:59.274 "id": 1, 00:34:59.274 "result": true 00:34:59.274 } 00:34:59.274 00:34:59.274 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.274 20:22:46 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:59.274 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.274 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.274 INFO: Setting log level to 20 00:34:59.274 INFO: Setting log level to 20 00:34:59.274 INFO: Log level set to 20 00:34:59.274 INFO: Log level set to 20 00:34:59.274 INFO: Requests: 00:34:59.274 { 00:34:59.274 "jsonrpc": "2.0", 00:34:59.274 "method": "framework_start_init", 00:34:59.274 "id": 1 00:34:59.274 } 00:34:59.274 00:34:59.274 INFO: Requests: 00:34:59.274 { 00:34:59.274 "jsonrpc": "2.0", 00:34:59.274 "method": "framework_start_init", 00:34:59.274 "id": 1 00:34:59.274 } 00:34:59.274 00:34:59.535 [2024-07-13 20:22:46.965270] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:59.535 INFO: response: 00:34:59.535 { 00:34:59.535 "jsonrpc": "2.0", 00:34:59.535 "id": 1, 00:34:59.535 "result": true 00:34:59.535 } 00:34:59.535 00:34:59.535 INFO: response: 00:34:59.535 { 00:34:59.535 "jsonrpc": "2.0", 00:34:59.535 "id": 1, 00:34:59.535 "result": true 00:34:59.535 } 00:34:59.535 00:34:59.535 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.535 20:22:46 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:59.535 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.535 20:22:46 nvmf_identify_passthru -- 
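The two JSON-RPC exchanges above map one-to-one onto scripts/rpc.py calls: enable identify passthru while the app is still paused, then let the framework finish initializing (which is also when the custom identify handler is registered):

rpc="$SPDK/scripts/rpc.py"
$rpc nvmf_set_config --passthru-identify-ctrlr   # emits the nvmf_set_config request shown above
$rpc framework_start_init                        # releases the --wait-for-rpc pause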
common/autotest_common.sh@10 -- # set +x 00:34:59.535 INFO: Setting log level to 40 00:34:59.535 INFO: Setting log level to 40 00:34:59.535 INFO: Setting log level to 40 00:34:59.535 [2024-07-13 20:22:46.975350] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:59.535 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:59.535 20:22:46 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:59.535 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:59.535 20:22:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:59.535 20:22:47 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:59.535 20:22:47 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:59.535 20:22:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.825 Nvme0n1 00:35:02.825 20:22:49 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.825 20:22:49 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:02.825 20:22:49 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.825 20:22:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.825 20:22:49 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.825 20:22:49 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:02.825 20:22:49 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.825 20:22:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.825 20:22:49 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.825 20:22:49 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:02.825 20:22:49 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.825 20:22:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.825 [2024-07-13 20:22:49.871330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.825 20:22:49 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.825 20:22:49 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:02.825 20:22:49 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.825 20:22:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.825 [ 00:35:02.825 { 00:35:02.825 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:02.825 "subtype": "Discovery", 00:35:02.825 "listen_addresses": [], 00:35:02.825 "allow_any_host": true, 00:35:02.825 "hosts": [] 00:35:02.825 }, 00:35:02.825 { 00:35:02.825 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:02.825 "subtype": "NVMe", 00:35:02.825 "listen_addresses": [ 00:35:02.825 { 00:35:02.825 "trtype": "TCP", 00:35:02.825 "adrfam": "IPv4", 00:35:02.825 "traddr": "10.0.0.2", 00:35:02.825 "trsvcid": "4420" 00:35:02.825 } 00:35:02.825 ], 00:35:02.825 "allow_any_host": true, 00:35:02.825 "hosts": [], 00:35:02.825 "serial_number": 
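The export path in this block, collected into one rpc.py sequence: create the TCP transport, attach the physical controller (which produces bdev Nvme0n1), and publish it through a single-namespace subsystem:

rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py'
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0   # -> Nvme0n1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_get_subsystems            # prints the subsystem JSON shown above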
"SPDK00000000000001", 00:35:02.825 "model_number": "SPDK bdev Controller", 00:35:02.825 "max_namespaces": 1, 00:35:02.825 "min_cntlid": 1, 00:35:02.825 "max_cntlid": 65519, 00:35:02.825 "namespaces": [ 00:35:02.825 { 00:35:02.825 "nsid": 1, 00:35:02.825 "bdev_name": "Nvme0n1", 00:35:02.825 "name": "Nvme0n1", 00:35:02.825 "nguid": "6B823D590E4F46E1A6C876F734AB75C1", 00:35:02.825 "uuid": "6b823d59-0e4f-46e1-a6c8-76f734ab75c1" 00:35:02.825 } 00:35:02.825 ] 00:35:02.825 } 00:35:02.825 ] 00:35:02.825 20:22:49 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.826 20:22:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:02.826 20:22:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:02.826 20:22:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:02.826 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.826 20:22:50 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:02.826 20:22:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:02.826 20:22:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:02.826 20:22:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:02.826 EAL: No free 2048 kB hugepages reported on node 1 00:35:02.826 20:22:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:02.826 20:22:50 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:02.826 20:22:50 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:02.826 20:22:50 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:02.826 20:22:50 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:02.826 20:22:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:02.826 20:22:50 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:02.826 20:22:50 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:02.826 20:22:50 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:02.826 20:22:50 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:02.826 20:22:50 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:02.826 20:22:50 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:02.826 20:22:50 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:02.826 20:22:50 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:02.826 20:22:50 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:02.826 rmmod nvme_tcp 00:35:02.826 rmmod nvme_fabrics 00:35:02.826 rmmod nvme_keyring 00:35:02.826 20:22:50 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:02.826 20:22:50 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:02.826 20:22:50 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:02.826 20:22:50 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3359161 ']' 00:35:02.826 20:22:50 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3359161 00:35:02.826 20:22:50 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 3359161 ']' 00:35:02.826 20:22:50 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 3359161 00:35:02.826 20:22:50 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:35:02.826 20:22:50 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:02.826 20:22:50 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3359161 00:35:02.826 20:22:50 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:02.826 20:22:50 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:02.826 20:22:50 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3359161' 00:35:02.826 killing process with pid 3359161 00:35:02.826 20:22:50 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 3359161 00:35:02.826 20:22:50 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 3359161 00:35:04.230 20:22:51 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:04.230 20:22:51 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:04.230 20:22:51 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:04.230 20:22:51 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:04.230 20:22:51 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:04.230 20:22:51 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.230 20:22:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:04.230 20:22:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.761 20:22:53 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:06.761 00:35:06.761 real 0m18.000s 00:35:06.761 user 0m26.681s 00:35:06.761 sys 0m2.359s 00:35:06.761 20:22:53 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:06.761 20:22:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:06.761 ************************************ 00:35:06.761 END TEST nvmf_identify_passthru 00:35:06.761 ************************************ 00:35:06.761 20:22:53 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:06.761 20:22:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:06.761 20:22:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:06.761 20:22:53 -- common/autotest_common.sh@10 -- # set +x 00:35:06.761 ************************************ 00:35:06.761 START TEST nvmf_dif 00:35:06.761 ************************************ 00:35:06.761 20:22:53 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:06.762 * Looking for test storage... 
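Teardown mirrors setup in reverse; a sketch of what nvmftestfini and _remove_spdk_ns amount to here (the namespace removal is an assumption inferred from the address flush that follows it in the log):

rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py'
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp        # triggers the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null || true
ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1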
00:35:06.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:06.762 20:22:54 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.762 20:22:54 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.762 20:22:54 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.762 20:22:54 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.762 20:22:54 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.762 20:22:54 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.762 20:22:54 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.762 20:22:54 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:06.762 20:22:54 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:06.762 20:22:54 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:06.762 20:22:54 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:06.762 20:22:54 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:06.762 20:22:54 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:06.762 20:22:54 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.762 20:22:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:06.762 20:22:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:06.762 20:22:54 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:06.762 20:22:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:08.666 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:08.666 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
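gather_supported_nvmf_pci_devs is a whitelist keyed on PCI vendor:device pairs; 0x8086:0x159b is the Intel E810 (ice driver) that both ports here report. A sketch of the same classification done directly against sysfs (a hypothetical helper, not the harness code):

# Classify one port and find its netdev name, as the loop above does.
pci=0000:0a:00.0
ven=$(cat "/sys/bus/pci/devices/$pci/vendor")    # 0x8086 (Intel)
dev=$(cat "/sys/bus/pci/devices/$pci/device")    # 0x159b (E810)
[ "$ven:$dev" = "0x8086:0x159b" ] \
    && ls "/sys/bus/pci/devices/$pci/net/"       # -> cvl_0_0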
00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:08.666 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:08.666 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:08.666 20:22:55 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:08.666 20:22:56 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:08.666 20:22:56 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:08.666 20:22:56 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:08.666 20:22:56 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:08.666 20:22:56 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:08.666 20:22:56 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:08.666 20:22:56 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:08.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:08.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:35:08.666 00:35:08.667 --- 10.0.0.2 ping statistics --- 00:35:08.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.667 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:35:08.667 20:22:56 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:08.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:08.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:35:08.667 00:35:08.667 --- 10.0.0.1 ping statistics --- 00:35:08.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:08.667 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:35:08.667 20:22:56 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:08.667 20:22:56 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:08.667 20:22:56 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:08.667 20:22:56 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:09.604 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:09.604 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:09.604 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:09.604 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:09.604 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:09.604 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:09.604 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:09.604 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:09.604 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:09.604 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:09.604 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:09.604 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:09.604 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:09.604 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:09.604 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:09.604 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:09.604 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:09.604 20:22:57 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:09.604 20:22:57 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:09.604 20:22:57 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:09.604 20:22:57 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:09.604 20:22:57 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:09.604 20:22:57 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:09.864 20:22:57 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:09.864 20:22:57 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:09.864 20:22:57 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:09.864 20:22:57 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:09.864 20:22:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:09.864 20:22:57 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3362304 00:35:09.864 20:22:57 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:09.864 20:22:57 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3362304 00:35:09.864 20:22:57 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 3362304 ']' 00:35:09.864 20:22:57 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:09.864 20:22:57 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:09.864 20:22:57 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:09.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:09.864 20:22:57 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:09.864 20:22:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:09.864 [2024-07-13 20:22:57.320316] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:35:09.864 [2024-07-13 20:22:57.320395] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:09.864 EAL: No free 2048 kB hugepages reported on node 1 00:35:09.864 [2024-07-13 20:22:57.384180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.864 [2024-07-13 20:22:57.471053] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:09.864 [2024-07-13 20:22:57.471110] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:09.864 [2024-07-13 20:22:57.471141] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:09.864 [2024-07-13 20:22:57.471153] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:09.864 [2024-07-13 20:22:57.471165] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:09.864 [2024-07-13 20:22:57.471207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.123 20:22:57 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:10.123 20:22:57 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:35:10.123 20:22:57 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:10.123 20:22:57 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:10.123 20:22:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:10.123 20:22:57 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:10.123 20:22:57 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:10.123 20:22:57 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:10.123 20:22:57 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.123 20:22:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:10.123 [2024-07-13 20:22:57.610307] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:10.123 20:22:57 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.123 20:22:57 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:10.123 20:22:57 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:10.123 20:22:57 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:10.123 20:22:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:10.123 ************************************ 00:35:10.123 START TEST fio_dif_1_default 00:35:10.123 ************************************ 00:35:10.123 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:35:10.123 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:10.123 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:10.123 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:10.123 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:10.123 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:10.123 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:10.123 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.123 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:10.123 bdev_null0 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- 
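For the DIF runs the backing store is a null bdev carrying 16 bytes of per-block metadata with DIF type 1, behind a transport created with --dif-insert-or-strip. The create_subsystems sequence above as plain rpc.py calls (the matching listener RPC follows just below in the log):

rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py'
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MB, 512 B blocks + 16 B md
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0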
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:10.124 [2024-07-13 20:22:57.670613] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:10.124 { 00:35:10.124 "params": { 00:35:10.124 "name": "Nvme$subsystem", 00:35:10.124 "trtype": "$TEST_TRANSPORT", 00:35:10.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:10.124 "adrfam": "ipv4", 00:35:10.124 "trsvcid": "$NVMF_PORT", 00:35:10.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:10.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:10.124 "hdgst": ${hdgst:-false}, 00:35:10.124 "ddgst": ${ddgst:-false} 00:35:10.124 }, 00:35:10.124 "method": "bdev_nvme_attach_controller" 00:35:10.124 } 00:35:10.124 EOF 00:35:10.124 )") 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd 
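gen_nvmf_target_json renders the heredoc above once per subsystem into bdev_nvme_attach_controller config entries, and fio_bdev runs stock fio with SPDK's bdev ioengine preloaded, feeding that JSON in over fd 62 and the generated job file over fd 61. One way to wire that up, in sketch form (the exact JSON wrapper emitted around the config entries is the harness's own and is not reproduced here):

# fio_bdev in sketch form: LD_PRELOAD the SPDK fio plugin, hand it the
# target JSON on fd 62 and the fio job file on fd 61.
LD_PRELOAD="$SPDK/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
    62< <(gen_nvmf_target_json 0) 61< <(gen_fio_conf)

The filename0 line below confirms fio is driving the NVMe-oF namespace as an SPDK bdev (ioengine=spdk_bdev, iodepth=4) rather than through the kernel initiator.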
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:10.124 "params": { 00:35:10.124 "name": "Nvme0", 00:35:10.124 "trtype": "tcp", 00:35:10.124 "traddr": "10.0.0.2", 00:35:10.124 "adrfam": "ipv4", 00:35:10.124 "trsvcid": "4420", 00:35:10.124 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.124 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:10.124 "hdgst": false, 00:35:10.124 "ddgst": false 00:35:10.124 }, 00:35:10.124 "method": "bdev_nvme_attach_controller" 00:35:10.124 }' 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:10.124 20:22:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.382 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:10.382 fio-3.35 00:35:10.382 Starting 1 thread 00:35:10.382 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.572 00:35:22.572 filename0: (groupid=0, jobs=1): err= 0: pid=3362534: Sat Jul 13 20:23:08 2024 00:35:22.572 read: IOPS=96, BW=388KiB/s (397kB/s)(3888KiB/10021msec) 00:35:22.572 slat (nsec): min=4365, max=30121, avg=9422.63, stdev=2814.31 00:35:22.572 clat (usec): min=40891, max=45921, avg=41208.93, stdev=516.24 00:35:22.572 lat (usec): min=40899, max=45935, avg=41218.35, stdev=516.27 00:35:22.572 clat percentiles (usec): 00:35:22.572 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:22.572 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:22.572 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:35:22.572 | 99.00th=[42206], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:35:22.572 | 99.99th=[45876] 00:35:22.572 bw ( KiB/s): min= 384, max= 416, per=99.75%, avg=387.20, stdev= 9.85, samples=20 00:35:22.572 iops : min= 96, max= 104, 
avg=96.80, stdev= 2.46, samples=20 00:35:22.572 lat (msec) : 50=100.00% 00:35:22.572 cpu : usr=89.25%, sys=10.50%, ctx=16, majf=0, minf=226 00:35:22.572 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.572 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.572 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:22.572 00:35:22.572 Run status group 0 (all jobs): 00:35:22.572 READ: bw=388KiB/s (397kB/s), 388KiB/s-388KiB/s (397kB/s-397kB/s), io=3888KiB (3981kB), run=10021-10021msec 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.572 00:35:22.572 real 0m11.176s 00:35:22.572 user 0m10.122s 00:35:22.572 sys 0m1.327s 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:22.572 ************************************ 00:35:22.572 END TEST fio_dif_1_default 00:35:22.572 ************************************ 00:35:22.572 20:23:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:22.572 20:23:08 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:22.572 20:23:08 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:22.572 20:23:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:22.572 ************************************ 00:35:22.572 START TEST fio_dif_1_multi_subsystems 00:35:22.572 ************************************ 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.572 bdev_null0 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.572 [2024-07-13 20:23:08.900281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.572 bdev_null1 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.572 20:23:08 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:22.572 { 00:35:22.572 "params": { 00:35:22.572 "name": "Nvme$subsystem", 00:35:22.572 "trtype": "$TEST_TRANSPORT", 00:35:22.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:22.572 "adrfam": "ipv4", 00:35:22.572 "trsvcid": "$NVMF_PORT", 00:35:22.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:22.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:22.572 "hdgst": ${hdgst:-false}, 00:35:22.572 "ddgst": ${ddgst:-false} 00:35:22.572 }, 00:35:22.572 "method": "bdev_nvme_attach_controller" 00:35:22.572 } 00:35:22.572 EOF 00:35:22.572 )") 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
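fio_dif_1_multi_subsystems repeats the same export once per file; the two create_subsystem blocks traced above collapse to a loop:

rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py'
for i in 0 1; do
    $rpc bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done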
common/autotest_common.sh@1337 -- # shift 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:22.572 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:22.573 { 00:35:22.573 "params": { 00:35:22.573 "name": "Nvme$subsystem", 00:35:22.573 "trtype": "$TEST_TRANSPORT", 00:35:22.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:22.573 "adrfam": "ipv4", 00:35:22.573 "trsvcid": "$NVMF_PORT", 00:35:22.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:22.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:22.573 "hdgst": ${hdgst:-false}, 00:35:22.573 "ddgst": ${ddgst:-false} 00:35:22.573 }, 00:35:22.573 "method": "bdev_nvme_attach_controller" 00:35:22.573 } 00:35:22.573 EOF 00:35:22.573 )") 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
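[Note on the config generation traced above] gen_nvmf_target_json (called here as "gen_nvmf_target_json 0 1") expands one bdev_nvme_attach_controller params fragment per subsystem id from the heredoc template, joins the fragments with IFS="," and runs the result through jq; the resolved two-controller config is printed just below. A minimal bash sketch of that pattern, reconstructed from the trace — the enclosing "subsystems"/"bdev" wrapper is an assumption about the shape fio's --spdk_json_conf expects, not something visible in this log:

# Sketch, assuming TEST_TRANSPORT/NVMF_FIRST_TARGET_IP/NVMF_PORT are set as
# in this run (tcp / 10.0.0.2 / 4420).
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller fragment per subsystem id (0, 1, ...),
        # exactly as the heredoc in the trace above expands it.
        config+=("$(
            cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with "," and validate/pretty-print via jq; the
    # bdev-subsystem wrapper below is the assumed outer structure.
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
    "config": [ $(IFS=","; printf '%s' "${config[*]}") ] } ] }
JSON
}

Calling gen_nvmf_target_json 0 1 under this sketch reproduces the Nvme0/Nvme1 attach blocks printed in the next trace entry.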
00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:22.573 "params": { 00:35:22.573 "name": "Nvme0", 00:35:22.573 "trtype": "tcp", 00:35:22.573 "traddr": "10.0.0.2", 00:35:22.573 "adrfam": "ipv4", 00:35:22.573 "trsvcid": "4420", 00:35:22.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:22.573 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:22.573 "hdgst": false, 00:35:22.573 "ddgst": false 00:35:22.573 }, 00:35:22.573 "method": "bdev_nvme_attach_controller" 00:35:22.573 },{ 00:35:22.573 "params": { 00:35:22.573 "name": "Nvme1", 00:35:22.573 "trtype": "tcp", 00:35:22.573 "traddr": "10.0.0.2", 00:35:22.573 "adrfam": "ipv4", 00:35:22.573 "trsvcid": "4420", 00:35:22.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:22.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:22.573 "hdgst": false, 00:35:22.573 "ddgst": false 00:35:22.573 }, 00:35:22.573 "method": "bdev_nvme_attach_controller" 00:35:22.573 }' 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:22.573 20:23:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.573 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:22.573 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:22.573 fio-3.35 00:35:22.573 Starting 2 threads 00:35:22.573 EAL: No free 2048 kB hugepages reported on node 1 00:35:32.533 00:35:32.533 filename0: (groupid=0, jobs=1): err= 0: pid=3363932: Sat Jul 13 20:23:19 2024 00:35:32.533 read: IOPS=189, BW=757KiB/s (776kB/s)(7584KiB/10013msec) 00:35:32.533 slat (nsec): min=4776, max=19008, avg=9522.94, stdev=2512.56 00:35:32.533 clat (usec): min=813, max=45009, avg=21092.65, stdev=20119.67 00:35:32.533 lat (usec): min=821, max=45021, avg=21102.17, stdev=20119.49 00:35:32.533 clat percentiles (usec): 00:35:32.533 | 1.00th=[ 832], 5.00th=[ 857], 10.00th=[ 865], 20.00th=[ 881], 00:35:32.533 | 30.00th=[ 898], 40.00th=[ 930], 50.00th=[41157], 60.00th=[41157], 00:35:32.533 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:32.533 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:35:32.533 | 99.99th=[44827] 
00:35:32.533 bw ( KiB/s): min= 672, max= 768, per=66.40%, avg=756.80, stdev=28.00, samples=20 00:35:32.533 iops : min= 168, max= 192, avg=189.20, stdev= 7.00, samples=20 00:35:32.533 lat (usec) : 1000=49.58% 00:35:32.533 lat (msec) : 2=0.21%, 50=50.21% 00:35:32.533 cpu : usr=93.96%, sys=4.97%, ctx=29, majf=0, minf=78 00:35:32.533 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.533 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.533 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:32.533 filename1: (groupid=0, jobs=1): err= 0: pid=3363933: Sat Jul 13 20:23:19 2024 00:35:32.533 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10019msec) 00:35:32.533 slat (nsec): min=4336, max=40943, avg=9719.89, stdev=3040.96 00:35:32.533 clat (usec): min=40947, max=45102, avg=41886.32, stdev=392.59 00:35:32.533 lat (usec): min=40955, max=45117, avg=41896.04, stdev=392.67 00:35:32.533 clat percentiles (usec): 00:35:32.533 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[42206], 00:35:32.533 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:32.533 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:32.533 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:35:32.533 | 99.99th=[45351] 00:35:32.533 bw ( KiB/s): min= 352, max= 384, per=33.37%, avg=380.80, stdev= 9.85, samples=20 00:35:32.533 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:35:32.533 lat (msec) : 50=100.00% 00:35:32.533 cpu : usr=94.41%, sys=5.19%, ctx=53, majf=0, minf=159 00:35:32.533 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:32.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:32.533 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:32.533 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:32.533 00:35:32.533 Run status group 0 (all jobs): 00:35:32.533 READ: bw=1139KiB/s (1166kB/s), 382KiB/s-757KiB/s (391kB/s-776kB/s), io=11.1MiB (11.7MB), run=10013-10019msec 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.791 00:35:32.791 real 0m11.363s 00:35:32.791 user 0m20.339s 00:35:32.791 sys 0m1.314s 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:32.791 20:23:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:32.791 ************************************ 00:35:32.791 END TEST fio_dif_1_multi_subsystems 00:35:32.791 ************************************ 00:35:32.791 20:23:20 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:32.791 20:23:20 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:32.791 20:23:20 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:32.792 20:23:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:32.792 ************************************ 00:35:32.792 START TEST fio_dif_rand_params 00:35:32.792 ************************************ 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=0 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.792 bdev_null0 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:32.792 [2024-07-13 20:23:20.314830] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:32.792 { 00:35:32.792 "params": { 00:35:32.792 "name": "Nvme$subsystem", 00:35:32.792 "trtype": "$TEST_TRANSPORT", 00:35:32.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:32.792 "adrfam": "ipv4", 00:35:32.792 "trsvcid": "$NVMF_PORT", 00:35:32.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:32.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:32.792 "hdgst": ${hdgst:-false}, 00:35:32.792 "ddgst": ${ddgst:-false} 00:35:32.792 }, 00:35:32.792 "method": "bdev_nvme_attach_controller" 00:35:32.792 } 00:35:32.792 EOF 00:35:32.792 )") 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
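[Note on the sanitizer probe traced above] Before launching fio, the harness runs ldd on the spdk_bdev ioengine plugin and greps for libasan and libclang_rt.asan: if the plugin links against a sanitizer runtime, that library has to be LD_PRELOADed ahead of the plugin or fio cannot load it. In this run both probes come back empty (the "[[ -n '' ]]" checks in the trace), so LD_PRELOAD ends up holding only the plugin path. A sketch of the probe, using the plugin path shown in the log; the loop mirrors the traced commands, while the surrounding structure is an assumption:

# Probe whether the fio plugin was built with ASan and must be preloaded.
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
sanitizers=('libasan' 'libclang_rt.asan')
asan_lib=
for sanitizer in "${sanitizers[@]}"; do
    # The third ldd column is the resolved library path; empty if not linked.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
# Preload the sanitizer runtime (if any) ahead of the plugin, then run fio
# with the generated JSON config on fd 62 and the job file on fd 61.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61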
00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:32.792 "params": { 00:35:32.792 "name": "Nvme0", 00:35:32.792 "trtype": "tcp", 00:35:32.792 "traddr": "10.0.0.2", 00:35:32.792 "adrfam": "ipv4", 00:35:32.792 "trsvcid": "4420", 00:35:32.792 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:32.792 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:32.792 "hdgst": false, 00:35:32.792 "ddgst": false 00:35:32.792 }, 00:35:32.792 "method": "bdev_nvme_attach_controller" 00:35:32.792 }' 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:32.792 20:23:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:33.050 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:33.050 ... 
00:35:33.050 fio-3.35 00:35:33.050 Starting 3 threads 00:35:33.050 EAL: No free 2048 kB hugepages reported on node 1 00:35:39.650 00:35:39.650 filename0: (groupid=0, jobs=1): err= 0: pid=3365328: Sat Jul 13 20:23:26 2024 00:35:39.650 read: IOPS=190, BW=23.8MiB/s (25.0MB/s)(120MiB/5043msec) 00:35:39.650 slat (nsec): min=4483, max=72507, avg=12849.88, stdev=2713.74 00:35:39.650 clat (usec): min=5556, max=96330, avg=15698.48, stdev=15357.60 00:35:39.650 lat (usec): min=5568, max=96344, avg=15711.33, stdev=15357.78 00:35:39.650 clat percentiles (usec): 00:35:39.650 | 1.00th=[ 6194], 5.00th=[ 6587], 10.00th=[ 6849], 20.00th=[ 8094], 00:35:39.650 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10945], 00:35:39.650 | 70.00th=[12518], 80.00th=[13698], 90.00th=[50594], 95.00th=[53216], 00:35:39.650 | 99.00th=[56886], 99.50th=[93848], 99.90th=[95945], 99.95th=[95945], 00:35:39.650 | 99.99th=[95945] 00:35:39.650 bw ( KiB/s): min=15360, max=32512, per=31.63%, avg=24524.80, stdev=5629.15, samples=10 00:35:39.650 iops : min= 120, max= 254, avg=191.60, stdev=43.98, samples=10 00:35:39.650 lat (msec) : 10=50.73%, 20=36.46%, 50=1.77%, 100=11.04% 00:35:39.650 cpu : usr=93.24%, sys=6.33%, ctx=9, majf=0, minf=142 00:35:39.650 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.650 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.650 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:39.650 filename0: (groupid=0, jobs=1): err= 0: pid=3365329: Sat Jul 13 20:23:26 2024 00:35:39.650 read: IOPS=203, BW=25.5MiB/s (26.7MB/s)(128MiB/5032msec) 00:35:39.650 slat (nsec): min=5249, max=29919, avg=14921.30, stdev=3270.15 00:35:39.650 clat (usec): min=4984, max=55603, avg=14701.91, stdev=13877.14 00:35:39.650 lat (usec): min=4997, max=55618, avg=14716.83, stdev=13877.54 00:35:39.650 clat percentiles (usec): 00:35:39.650 | 1.00th=[ 5538], 5.00th=[ 5932], 10.00th=[ 6521], 20.00th=[ 7898], 00:35:39.650 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10421], 00:35:39.650 | 70.00th=[11731], 80.00th=[12780], 90.00th=[49546], 95.00th=[51119], 00:35:39.650 | 99.00th=[53740], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:35:39.650 | 99.99th=[55837] 00:35:39.650 bw ( KiB/s): min=16384, max=35840, per=33.75%, avg=26168.10, stdev=6853.95, samples=10 00:35:39.650 iops : min= 128, max= 280, avg=204.40, stdev=53.56, samples=10 00:35:39.650 lat (msec) : 10=56.59%, 20=30.83%, 50=4.10%, 100=8.49% 00:35:39.650 cpu : usr=91.89%, sys=7.53%, ctx=12, majf=0, minf=122 00:35:39.650 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.650 issued rwts: total=1025,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.650 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:39.650 filename0: (groupid=0, jobs=1): err= 0: pid=3365330: Sat Jul 13 20:23:26 2024 00:35:39.650 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(134MiB/5045msec) 00:35:39.650 slat (nsec): min=4351, max=24562, avg=13396.77, stdev=2160.47 00:35:39.650 clat (usec): min=5081, max=92420, avg=14034.08, stdev=13767.18 00:35:39.650 lat (usec): min=5094, max=92435, avg=14047.48, stdev=13767.19 00:35:39.650 clat percentiles (usec): 
00:35:39.650 | 1.00th=[ 5669], 5.00th=[ 6063], 10.00th=[ 6587], 20.00th=[ 7898], 00:35:39.650 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[10028], 00:35:39.650 | 70.00th=[11338], 80.00th=[12387], 90.00th=[48497], 95.00th=[51119], 00:35:39.650 | 99.00th=[53740], 99.50th=[54264], 99.90th=[92799], 99.95th=[92799], 00:35:39.650 | 99.99th=[92799] 00:35:39.650 bw ( KiB/s): min=16128, max=39168, per=35.26%, avg=27340.80, stdev=8587.60, samples=10 00:35:39.650 iops : min= 126, max= 306, avg=213.60, stdev=67.09, samples=10 00:35:39.650 lat (msec) : 10=59.94%, 20=29.13%, 50=3.27%, 100=7.66% 00:35:39.650 cpu : usr=93.42%, sys=6.05%, ctx=7, majf=0, minf=57 00:35:39.650 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.650 issued rwts: total=1071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.650 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:39.650 00:35:39.650 Run status group 0 (all jobs): 00:35:39.650 READ: bw=75.7MiB/s (79.4MB/s), 23.8MiB/s-26.5MiB/s (25.0MB/s-27.8MB/s), io=382MiB (401MB), run=5032-5045msec 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:39.650 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
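[Note on the subsystem setup traced above] Every create_subsystem call in this file issues the same four RPCs: create a 64 MB null bdev with 512-byte blocks, 16 bytes of per-block metadata and the DIF type under test (1 in the multi-subsystems test, 3 and now 2 here), wrap it in an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. destroy_subsystem later undoes this in reverse order: nvmf_delete_subsystem first, then bdev_null_delete. A sketch assembled from the rpc_cmd lines in the trace — rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, and passing the DIF type via $NULL_DIF is an assumption based on the per-test "NULL_DIF=" settings shown above:

create_subsystem() {
    local sub_id=$1
    # Null bdev: 64 MB, 512 B blocks, 16 B metadata, DIF type $NULL_DIF.
    rpc_cmd bdev_null_create "bdev_null$sub_id" 64 512 \
        --md-size 16 --dif-type "$NULL_DIF"
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub_id" \
        --serial-number "53313233-$sub_id" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub_id" \
        "bdev_null$sub_id"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub_id" \
        -t tcp -a 10.0.0.2 -s 4420
}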
00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.651 bdev_null0 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.651 [2024-07-13 20:23:26.566086] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.651 bdev_null1 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.651 bdev_null2 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:39.651 { 00:35:39.651 "params": { 00:35:39.651 "name": "Nvme$subsystem", 00:35:39.651 "trtype": "$TEST_TRANSPORT", 00:35:39.651 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:35:39.651 "adrfam": "ipv4", 00:35:39.651 "trsvcid": "$NVMF_PORT", 00:35:39.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:39.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:39.651 "hdgst": ${hdgst:-false}, 00:35:39.651 "ddgst": ${ddgst:-false} 00:35:39.651 }, 00:35:39.651 "method": "bdev_nvme_attach_controller" 00:35:39.651 } 00:35:39.651 EOF 00:35:39.651 )") 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:39.651 { 00:35:39.651 "params": { 00:35:39.651 "name": "Nvme$subsystem", 00:35:39.651 "trtype": "$TEST_TRANSPORT", 00:35:39.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:39.651 "adrfam": "ipv4", 00:35:39.651 "trsvcid": "$NVMF_PORT", 00:35:39.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:39.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:39.651 "hdgst": ${hdgst:-false}, 00:35:39.651 "ddgst": ${ddgst:-false} 00:35:39.651 }, 00:35:39.651 "method": "bdev_nvme_attach_controller" 00:35:39.651 } 00:35:39.651 EOF 00:35:39.651 )") 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:39.651 { 00:35:39.651 "params": { 00:35:39.651 "name": "Nvme$subsystem", 00:35:39.651 "trtype": "$TEST_TRANSPORT", 00:35:39.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:39.651 "adrfam": "ipv4", 00:35:39.651 "trsvcid": "$NVMF_PORT", 00:35:39.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:39.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:39.651 "hdgst": ${hdgst:-false}, 00:35:39.651 "ddgst": ${ddgst:-false} 00:35:39.651 }, 00:35:39.651 "method": "bdev_nvme_attach_controller" 00:35:39.651 } 00:35:39.651 EOF 00:35:39.651 )") 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:39.651 20:23:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:39.651 "params": { 00:35:39.651 "name": "Nvme0", 00:35:39.651 "trtype": "tcp", 00:35:39.651 "traddr": "10.0.0.2", 00:35:39.651 "adrfam": "ipv4", 00:35:39.652 "trsvcid": "4420", 00:35:39.652 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:39.652 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:39.652 "hdgst": false, 00:35:39.652 "ddgst": false 00:35:39.652 }, 00:35:39.652 "method": "bdev_nvme_attach_controller" 00:35:39.652 },{ 00:35:39.652 "params": { 00:35:39.652 "name": "Nvme1", 00:35:39.652 "trtype": "tcp", 00:35:39.652 "traddr": "10.0.0.2", 00:35:39.652 "adrfam": "ipv4", 00:35:39.652 "trsvcid": "4420", 00:35:39.652 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:39.652 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:39.652 "hdgst": false, 00:35:39.652 "ddgst": false 00:35:39.652 }, 00:35:39.652 "method": "bdev_nvme_attach_controller" 00:35:39.652 },{ 00:35:39.652 "params": { 00:35:39.652 "name": "Nvme2", 00:35:39.652 "trtype": "tcp", 00:35:39.652 "traddr": "10.0.0.2", 00:35:39.652 "adrfam": "ipv4", 00:35:39.652 "trsvcid": "4420", 00:35:39.652 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:39.652 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:39.652 "hdgst": false, 00:35:39.652 "ddgst": false 00:35:39.652 }, 00:35:39.652 "method": "bdev_nvme_attach_controller" 00:35:39.652 }' 00:35:39.652 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:39.652 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:39.652 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:39.652 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.652 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:39.652 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:39.652 20:23:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:35:39.652 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:39.652 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:39.652 20:23:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.652 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:39.652 ... 00:35:39.652 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:39.652 ... 00:35:39.652 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:39.652 ... 00:35:39.652 fio-3.35 00:35:39.652 Starting 24 threads 00:35:39.652 EAL: No free 2048 kB hugepages reported on node 1 00:35:51.853 00:35:51.853 filename0: (groupid=0, jobs=1): err= 0: pid=3366195: Sat Jul 13 20:23:37 2024 00:35:51.853 read: IOPS=481, BW=1927KiB/s (1974kB/s)(18.9MiB/10020msec) 00:35:51.853 slat (usec): min=8, max=110, avg=41.19, stdev=15.45 00:35:51.853 clat (usec): min=19641, max=55810, avg=32917.44, stdev=2373.56 00:35:51.853 lat (usec): min=19674, max=55848, avg=32958.63, stdev=2374.20 00:35:51.853 clat percentiles (usec): 00:35:51.853 | 1.00th=[22414], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:51.853 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:51.853 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:51.853 | 99.00th=[40109], 99.50th=[49546], 99.90th=[55837], 99.95th=[55837], 00:35:51.853 | 99.99th=[55837] 00:35:51.853 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1918.32, stdev=59.60, samples=19 00:35:51.853 iops : min= 448, max= 512, avg=479.58, stdev=14.90, samples=19 00:35:51.853 lat (msec) : 20=0.35%, 50=99.28%, 100=0.37% 00:35:51.853 cpu : usr=97.27%, sys=2.11%, ctx=32, majf=0, minf=62 00:35:51.853 IO depths : 1=4.4%, 2=8.9%, 4=18.0%, 8=59.0%, 16=9.7%, 32=0.0%, >=64=0.0% 00:35:51.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.853 complete : 0=0.0%, 4=92.7%, 8=3.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.853 issued rwts: total=4828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.853 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.853 filename0: (groupid=0, jobs=1): err= 0: pid=3366196: Sat Jul 13 20:23:37 2024 00:35:51.853 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10014msec) 00:35:51.853 slat (nsec): min=8038, max=55215, avg=18866.79, stdev=10458.33 00:35:51.853 clat (usec): min=19914, max=72454, avg=33214.29, stdev=2516.46 00:35:51.853 lat (usec): min=19923, max=72494, avg=33233.16, stdev=2515.94 00:35:51.853 clat percentiles (usec): 00:35:51.853 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:35:51.853 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:35:51.853 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:51.853 | 99.00th=[37487], 99.50th=[40109], 99.90th=[72877], 99.95th=[72877], 00:35:51.853 | 99.99th=[72877] 00:35:51.853 bw ( KiB/s): min= 1664, max= 2032, per=4.14%, avg=1913.26, stdev=65.61, samples=19 00:35:51.853 iops : min= 416, max= 508, avg=478.32, stdev=16.40, samples=19 00:35:51.853 lat (msec) : 20=0.04%, 50=99.63%, 100=0.33% 
00:35:51.853 cpu : usr=98.04%, sys=1.58%, ctx=8, majf=0, minf=31 00:35:51.853 IO depths : 1=1.8%, 2=8.0%, 4=25.0%, 8=54.5%, 16=10.7%, 32=0.0%, >=64=0.0% 00:35:51.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.853 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.854 filename0: (groupid=0, jobs=1): err= 0: pid=3366197: Sat Jul 13 20:23:37 2024 00:35:51.854 read: IOPS=481, BW=1927KiB/s (1973kB/s)(18.8MiB/10005msec) 00:35:51.854 slat (usec): min=8, max=118, avg=62.27, stdev=21.31 00:35:51.854 clat (usec): min=6228, max=93056, avg=32979.48, stdev=3733.85 00:35:51.854 lat (usec): min=6237, max=93088, avg=33041.75, stdev=3733.11 00:35:51.854 clat percentiles (usec): 00:35:51.854 | 1.00th=[23200], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:35:51.854 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:51.854 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:51.854 | 99.00th=[40633], 99.50th=[46924], 99.90th=[76022], 99.95th=[92799], 00:35:51.854 | 99.99th=[92799] 00:35:51.854 bw ( KiB/s): min= 1664, max= 2000, per=4.15%, avg=1917.47, stdev=70.20, samples=19 00:35:51.854 iops : min= 416, max= 500, avg=479.37, stdev=17.55, samples=19 00:35:51.854 lat (msec) : 10=0.29%, 20=0.68%, 50=98.53%, 100=0.50% 00:35:51.854 cpu : usr=98.08%, sys=1.48%, ctx=9, majf=0, minf=45 00:35:51.854 IO depths : 1=0.1%, 2=0.3%, 4=1.5%, 8=80.1%, 16=18.2%, 32=0.0%, >=64=0.0% 00:35:51.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 complete : 0=0.0%, 4=89.6%, 8=9.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 issued rwts: total=4820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.854 filename0: (groupid=0, jobs=1): err= 0: pid=3366198: Sat Jul 13 20:23:37 2024 00:35:51.854 read: IOPS=481, BW=1925KiB/s (1972kB/s)(18.8MiB/10005msec) 00:35:51.854 slat (usec): min=7, max=135, avg=65.52, stdev=19.07 00:35:51.854 clat (usec): min=16686, max=46914, avg=32684.87, stdev=1424.37 00:35:51.854 lat (usec): min=16737, max=46991, avg=32750.39, stdev=1421.71 00:35:51.854 clat percentiles (usec): 00:35:51.854 | 1.00th=[30016], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:35:51.854 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32637], 60.00th=[32637], 00:35:51.854 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:35:51.854 | 99.00th=[38536], 99.50th=[39060], 99.90th=[41157], 99.95th=[41681], 00:35:51.854 | 99.99th=[46924] 00:35:51.854 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1920.00, stdev=85.33, samples=19 00:35:51.854 iops : min= 448, max= 512, avg=480.00, stdev=21.33, samples=19 00:35:51.854 lat (msec) : 20=0.08%, 50=99.92% 00:35:51.854 cpu : usr=98.08%, sys=1.46%, ctx=18, majf=0, minf=28 00:35:51.854 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:35:51.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.854 filename0: (groupid=0, jobs=1): err= 0: pid=3366199: Sat Jul 13 20:23:37 2024 00:35:51.854 read: IOPS=485, BW=1941KiB/s 
(1987kB/s)(19.0MiB/10025msec) 00:35:51.854 slat (usec): min=5, max=103, avg=21.99, stdev=12.37 00:35:51.854 clat (usec): min=5213, max=39503, avg=32791.75, stdev=2647.64 00:35:51.854 lat (usec): min=5221, max=39524, avg=32813.74, stdev=2647.84 00:35:51.854 clat percentiles (usec): 00:35:51.854 | 1.00th=[20317], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:35:51.854 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:35:51.854 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:51.854 | 99.00th=[37487], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:35:51.854 | 99.99th=[39584] 00:35:51.854 bw ( KiB/s): min= 1920, max= 2176, per=4.20%, avg=1939.20, stdev=62.64, samples=20 00:35:51.854 iops : min= 480, max= 544, avg=484.80, stdev=15.66, samples=20 00:35:51.854 lat (msec) : 10=0.66%, 20=0.33%, 50=99.01% 00:35:51.854 cpu : usr=97.91%, sys=1.61%, ctx=29, majf=0, minf=38 00:35:51.854 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:51.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.854 filename0: (groupid=0, jobs=1): err= 0: pid=3366200: Sat Jul 13 20:23:37 2024 00:35:51.854 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10010msec) 00:35:51.854 slat (usec): min=8, max=106, avg=39.42, stdev=17.68 00:35:51.854 clat (usec): min=13999, max=65821, avg=32918.28, stdev=3026.40 00:35:51.854 lat (usec): min=14008, max=65850, avg=32957.70, stdev=3026.38 00:35:51.854 clat percentiles (usec): 00:35:51.854 | 1.00th=[21103], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:51.854 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:51.854 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:51.854 | 99.00th=[41681], 99.50th=[56361], 99.90th=[65799], 99.95th=[65799], 00:35:51.854 | 99.99th=[65799] 00:35:51.854 bw ( KiB/s): min= 1763, max= 2032, per=4.16%, avg=1920.16, stdev=47.45, samples=19 00:35:51.854 iops : min= 440, max= 508, avg=480.00, stdev=12.00, samples=19 00:35:51.854 lat (msec) : 20=0.60%, 50=98.78%, 100=0.62% 00:35:51.854 cpu : usr=98.08%, sys=1.51%, ctx=28, majf=0, minf=36 00:35:51.854 IO depths : 1=1.4%, 2=7.4%, 4=24.1%, 8=56.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:35:51.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 issued rwts: total=4818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.854 filename0: (groupid=0, jobs=1): err= 0: pid=3366201: Sat Jul 13 20:23:37 2024 00:35:51.854 read: IOPS=481, BW=1926KiB/s (1973kB/s)(18.9MiB/10033msec) 00:35:51.854 slat (usec): min=7, max=101, avg=29.13, stdev=19.68 00:35:51.854 clat (usec): min=16911, max=51278, avg=32958.46, stdev=1433.96 00:35:51.854 lat (usec): min=16935, max=51346, avg=32987.59, stdev=1434.64 00:35:51.854 clat percentiles (usec): 00:35:51.854 | 1.00th=[30540], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:35:51.854 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:35:51.854 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:51.854 | 99.00th=[36963], 99.50th=[38536], 99.90th=[47449], 
99.95th=[48497], 00:35:51.854 | 99.99th=[51119] 00:35:51.854 bw ( KiB/s): min= 1904, max= 2048, per=4.17%, avg=1926.40, stdev=29.09, samples=20 00:35:51.854 iops : min= 476, max= 512, avg=481.60, stdev= 7.27, samples=20 00:35:51.854 lat (msec) : 20=0.46%, 50=99.50%, 100=0.04% 00:35:51.854 cpu : usr=97.71%, sys=1.68%, ctx=105, majf=0, minf=33 00:35:51.854 IO depths : 1=4.4%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.1%, 32=0.0%, >=64=0.0% 00:35:51.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.854 filename0: (groupid=0, jobs=1): err= 0: pid=3366202: Sat Jul 13 20:23:37 2024 00:35:51.854 read: IOPS=479, BW=1919KiB/s (1965kB/s)(18.8MiB/10004msec) 00:35:51.854 slat (nsec): min=8721, max=85269, avg=36829.21, stdev=13866.94 00:35:51.854 clat (usec): min=18192, max=76562, avg=33000.46, stdev=2738.92 00:35:51.854 lat (usec): min=18227, max=76598, avg=33037.29, stdev=2738.91 00:35:51.854 clat percentiles (usec): 00:35:51.854 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:51.854 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:51.854 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:35:51.854 | 99.00th=[37487], 99.50th=[39060], 99.90th=[76022], 99.95th=[76022], 00:35:51.854 | 99.99th=[77071] 00:35:51.854 bw ( KiB/s): min= 1664, max= 2048, per=4.14%, avg=1913.26, stdev=67.11, samples=19 00:35:51.854 iops : min= 416, max= 512, avg=478.32, stdev=16.78, samples=19 00:35:51.854 lat (msec) : 20=0.15%, 50=99.52%, 100=0.33% 00:35:51.854 cpu : usr=97.97%, sys=1.63%, ctx=36, majf=0, minf=38 00:35:51.854 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:51.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.854 filename1: (groupid=0, jobs=1): err= 0: pid=3366203: Sat Jul 13 20:23:37 2024 00:35:51.854 read: IOPS=482, BW=1929KiB/s (1975kB/s)(18.9MiB/10020msec) 00:35:51.854 slat (usec): min=8, max=111, avg=44.63, stdev=17.86 00:35:51.854 clat (usec): min=18972, max=55812, avg=32800.55, stdev=1590.81 00:35:51.854 lat (usec): min=19023, max=55873, avg=32845.18, stdev=1591.04 00:35:51.854 clat percentiles (usec): 00:35:51.854 | 1.00th=[26870], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:51.854 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:51.854 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:35:51.854 | 99.00th=[38536], 99.50th=[39060], 99.90th=[41681], 99.95th=[55837], 00:35:51.854 | 99.99th=[55837] 00:35:51.854 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1920.00, stdev=68.30, samples=19 00:35:51.854 iops : min= 448, max= 512, avg=480.00, stdev=17.07, samples=19 00:35:51.854 lat (msec) : 20=0.19%, 50=99.73%, 100=0.08% 00:35:51.854 cpu : usr=98.04%, sys=1.54%, ctx=14, majf=0, minf=35 00:35:51.854 IO depths : 1=5.6%, 2=11.3%, 4=24.4%, 8=51.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:35:51.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:35:51.854 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.854 filename1: (groupid=0, jobs=1): err= 0: pid=3366204: Sat Jul 13 20:23:37 2024 00:35:51.854 read: IOPS=481, BW=1924KiB/s (1970kB/s)(18.8MiB/10012msec) 00:35:51.854 slat (usec): min=8, max=111, avg=34.08, stdev=20.35 00:35:51.854 clat (usec): min=11232, max=67239, avg=32950.69, stdev=2505.77 00:35:51.854 lat (usec): min=11255, max=67280, avg=32984.77, stdev=2505.19 00:35:51.854 clat percentiles (usec): 00:35:51.854 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:35:51.854 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:51.854 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:35:51.854 | 99.00th=[37487], 99.50th=[40109], 99.90th=[67634], 99.95th=[67634], 00:35:51.854 | 99.99th=[67634] 00:35:51.854 bw ( KiB/s): min= 1660, max= 2048, per=4.14%, avg=1913.05, stdev=80.22, samples=19 00:35:51.854 iops : min= 415, max= 512, avg=478.26, stdev=20.06, samples=19 00:35:51.854 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:35:51.854 cpu : usr=97.19%, sys=1.94%, ctx=149, majf=0, minf=27 00:35:51.854 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:51.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.854 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.855 filename1: (groupid=0, jobs=1): err= 0: pid=3366205: Sat Jul 13 20:23:37 2024 00:35:51.855 read: IOPS=482, BW=1929KiB/s (1975kB/s)(18.9MiB/10021msec) 00:35:51.855 slat (nsec): min=7239, max=78415, avg=31082.75, stdev=13141.77 00:35:51.855 clat (usec): min=20193, max=41851, avg=32913.62, stdev=1309.43 00:35:51.855 lat (usec): min=20217, max=41888, avg=32944.70, stdev=1308.59 00:35:51.855 clat percentiles (usec): 00:35:51.855 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:51.855 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:51.855 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:51.855 | 99.00th=[37487], 99.50th=[39060], 99.90th=[39584], 99.95th=[41681], 00:35:51.855 | 99.99th=[41681] 00:35:51.855 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1926.55, stdev=28.59, samples=20 00:35:51.855 iops : min= 480, max= 512, avg=481.60, stdev= 7.16, samples=20 00:35:51.855 lat (msec) : 50=100.00% 00:35:51.855 cpu : usr=93.76%, sys=3.44%, ctx=152, majf=0, minf=47 00:35:51.855 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:51.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.855 filename1: (groupid=0, jobs=1): err= 0: pid=3366206: Sat Jul 13 20:23:37 2024 00:35:51.855 read: IOPS=481, BW=1925KiB/s (1971kB/s)(18.8MiB/10005msec) 00:35:51.855 slat (usec): min=8, max=115, avg=35.33, stdev=22.68 00:35:51.855 clat (usec): min=4623, max=59679, avg=32949.24, stdev=2768.58 00:35:51.855 lat (usec): min=4632, max=59712, avg=32984.57, stdev=2767.87 00:35:51.855 clat percentiles (usec): 00:35:51.855 | 
1.00th=[25297], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:35:51.855 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:51.855 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34866], 00:35:51.855 | 99.00th=[40109], 99.50th=[47973], 99.90th=[59507], 99.95th=[59507], 00:35:51.855 | 99.99th=[59507] 00:35:51.855 bw ( KiB/s): min= 1776, max= 2048, per=4.15%, avg=1918.32, stdev=57.42, samples=19 00:35:51.855 iops : min= 444, max= 512, avg=479.58, stdev=14.35, samples=19 00:35:51.855 lat (msec) : 10=0.04%, 20=0.79%, 50=98.80%, 100=0.37% 00:35:51.855 cpu : usr=98.26%, sys=1.33%, ctx=17, majf=0, minf=43 00:35:51.855 IO depths : 1=3.0%, 2=9.2%, 4=24.9%, 8=53.4%, 16=9.5%, 32=0.0%, >=64=0.0% 00:35:51.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 issued rwts: total=4814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.855 filename1: (groupid=0, jobs=1): err= 0: pid=3366207: Sat Jul 13 20:23:37 2024 00:35:51.855 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10017msec) 00:35:51.855 slat (usec): min=6, max=113, avg=43.00, stdev=19.80 00:35:51.855 clat (usec): min=17574, max=39453, avg=32805.03, stdev=1378.55 00:35:51.855 lat (usec): min=17582, max=39477, avg=32848.03, stdev=1378.22 00:35:51.855 clat percentiles (usec): 00:35:51.855 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:51.855 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:51.855 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[33817], 00:35:51.855 | 99.00th=[36439], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:35:51.855 | 99.99th=[39584] 00:35:51.855 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1926.74, stdev=29.37, samples=19 00:35:51.855 iops : min= 480, max= 512, avg=481.68, stdev= 7.34, samples=19 00:35:51.855 lat (msec) : 20=0.35%, 50=99.65% 00:35:51.855 cpu : usr=97.92%, sys=1.60%, ctx=44, majf=0, minf=47 00:35:51.855 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:51.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.855 filename1: (groupid=0, jobs=1): err= 0: pid=3366208: Sat Jul 13 20:23:37 2024 00:35:51.855 read: IOPS=480, BW=1924KiB/s (1970kB/s)(18.8MiB/10014msec) 00:35:51.855 slat (nsec): min=8885, max=67390, avg=27198.98, stdev=9223.14 00:35:51.855 clat (usec): min=15921, max=55941, avg=33012.88, stdev=1361.53 00:35:51.855 lat (usec): min=15966, max=55956, avg=33040.08, stdev=1360.26 00:35:51.855 clat percentiles (usec): 00:35:51.855 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:51.855 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:51.855 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:51.855 | 99.00th=[37487], 99.50th=[39060], 99.90th=[50070], 99.95th=[50070], 00:35:51.855 | 99.99th=[55837] 00:35:51.855 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1920.00, stdev=73.90, samples=19 00:35:51.855 iops : min= 448, max= 512, avg=480.00, stdev=18.48, samples=19 00:35:51.855 lat (msec) : 20=0.17%, 50=99.71%, 100=0.12% 00:35:51.855 cpu 
: usr=90.95%, sys=4.36%, ctx=149, majf=0, minf=38 00:35:51.855 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:51.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.855 filename1: (groupid=0, jobs=1): err= 0: pid=3366209: Sat Jul 13 20:23:37 2024 00:35:51.855 read: IOPS=484, BW=1937KiB/s (1983kB/s)(18.9MiB/10013msec) 00:35:51.855 slat (nsec): min=8122, max=99547, avg=20028.88, stdev=11582.01 00:35:51.855 clat (usec): min=12026, max=41540, avg=32878.30, stdev=2030.26 00:35:51.855 lat (usec): min=12039, max=41575, avg=32898.33, stdev=2030.01 00:35:51.855 clat percentiles (usec): 00:35:51.855 | 1.00th=[24511], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:35:51.855 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:35:51.855 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:51.855 | 99.00th=[35914], 99.50th=[38536], 99.90th=[40109], 99.95th=[40109], 00:35:51.855 | 99.99th=[41681] 00:35:51.855 bw ( KiB/s): min= 1920, max= 2052, per=4.19%, avg=1933.00, stdev=40.02, samples=20 00:35:51.855 iops : min= 480, max= 513, avg=483.25, stdev=10.00, samples=20 00:35:51.855 lat (msec) : 20=0.66%, 50=99.34% 00:35:51.855 cpu : usr=98.22%, sys=1.35%, ctx=21, majf=0, minf=43 00:35:51.855 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:51.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.855 filename1: (groupid=0, jobs=1): err= 0: pid=3366210: Sat Jul 13 20:23:37 2024 00:35:51.855 read: IOPS=484, BW=1939KiB/s (1985kB/s)(19.0MiB/10010msec) 00:35:51.855 slat (usec): min=6, max=102, avg=36.27, stdev=11.20 00:35:51.855 clat (usec): min=14899, max=78729, avg=32677.68, stdev=3188.40 00:35:51.855 lat (usec): min=14908, max=78750, avg=32713.96, stdev=3187.85 00:35:51.855 clat percentiles (usec): 00:35:51.855 | 1.00th=[20317], 5.00th=[31851], 10.00th=[32375], 20.00th=[32375], 00:35:51.855 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:51.855 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:35:51.855 | 99.00th=[39060], 99.50th=[46924], 99.90th=[65799], 99.95th=[78119], 00:35:51.855 | 99.99th=[79168] 00:35:51.855 bw ( KiB/s): min= 1856, max= 2144, per=4.19%, avg=1935.16, stdev=60.69, samples=19 00:35:51.855 iops : min= 464, max= 536, avg=483.79, stdev=15.17, samples=19 00:35:51.855 lat (msec) : 20=0.91%, 50=98.76%, 100=0.33% 00:35:51.855 cpu : usr=98.16%, sys=1.43%, ctx=48, majf=0, minf=26 00:35:51.855 IO depths : 1=5.8%, 2=11.5%, 4=23.5%, 8=52.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:35:51.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 issued rwts: total=4852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.855 filename2: (groupid=0, jobs=1): err= 0: pid=3366211: Sat Jul 13 20:23:37 2024 00:35:51.855 read: IOPS=480, BW=1923KiB/s 
(1969kB/s)(18.8MiB/10020msec) 00:35:51.855 slat (nsec): min=8647, max=86364, avg=38001.53, stdev=11523.38 00:35:51.855 clat (usec): min=19615, max=55824, avg=32945.75, stdev=1850.13 00:35:51.855 lat (usec): min=19638, max=55864, avg=32983.75, stdev=1850.23 00:35:51.855 clat percentiles (usec): 00:35:51.855 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:35:51.855 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32637], 60.00th=[32900], 00:35:51.855 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33424], 95.00th=[34341], 00:35:51.855 | 99.00th=[39060], 99.50th=[39584], 99.90th=[55837], 99.95th=[55837], 00:35:51.855 | 99.99th=[55837] 00:35:51.855 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1913.26, stdev=79.70, samples=19 00:35:51.855 iops : min= 448, max= 512, avg=478.32, stdev=19.93, samples=19 00:35:51.855 lat (msec) : 20=0.23%, 50=99.44%, 100=0.33% 00:35:51.855 cpu : usr=87.41%, sys=5.63%, ctx=215, majf=0, minf=30 00:35:51.855 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:51.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.855 filename2: (groupid=0, jobs=1): err= 0: pid=3366212: Sat Jul 13 20:23:37 2024 00:35:51.855 read: IOPS=481, BW=1927KiB/s (1974kB/s)(18.9MiB/10028msec) 00:35:51.855 slat (usec): min=5, max=104, avg=35.05, stdev=17.19 00:35:51.855 clat (usec): min=18396, max=51778, avg=32905.15, stdev=1686.06 00:35:51.855 lat (usec): min=18423, max=51819, avg=32940.21, stdev=1686.87 00:35:51.855 clat percentiles (usec): 00:35:51.855 | 1.00th=[30016], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:35:51.855 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:51.855 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:51.855 | 99.00th=[38536], 99.50th=[39584], 99.90th=[46924], 99.95th=[46924], 00:35:51.855 | 99.99th=[51643] 00:35:51.855 bw ( KiB/s): min= 1904, max= 2048, per=4.17%, avg=1926.40, stdev=29.09, samples=20 00:35:51.855 iops : min= 476, max= 512, avg=481.60, stdev= 7.27, samples=20 00:35:51.855 lat (msec) : 20=0.29%, 50=99.67%, 100=0.04% 00:35:51.855 cpu : usr=94.87%, sys=2.86%, ctx=81, majf=0, minf=43 00:35:51.855 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:51.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.855 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.855 filename2: (groupid=0, jobs=1): err= 0: pid=3366213: Sat Jul 13 20:23:37 2024 00:35:51.855 read: IOPS=481, BW=1925KiB/s (1972kB/s)(18.8MiB/10005msec) 00:35:51.855 slat (nsec): min=8205, max=87136, avg=30315.10, stdev=10540.92 00:35:51.855 clat (usec): min=17977, max=55294, avg=32989.35, stdev=1260.09 00:35:51.856 lat (usec): min=17992, max=55314, avg=33019.66, stdev=1259.95 00:35:51.856 clat percentiles (usec): 00:35:51.856 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:51.856 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:51.856 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:51.856 | 99.00th=[36963], 
99.50th=[38536], 99.90th=[47973], 99.95th=[47973], 00:35:51.856 | 99.99th=[55313] 00:35:51.856 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1920.00, stdev=74.09, samples=19 00:35:51.856 iops : min= 448, max= 512, avg=480.00, stdev=18.52, samples=19 00:35:51.856 lat (msec) : 20=0.12%, 50=99.83%, 100=0.04% 00:35:51.856 cpu : usr=97.98%, sys=1.62%, ctx=31, majf=0, minf=29 00:35:51.856 IO depths : 1=4.2%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:35:51.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.856 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.856 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.856 filename2: (groupid=0, jobs=1): err= 0: pid=3366214: Sat Jul 13 20:23:37 2024 00:35:51.856 read: IOPS=481, BW=1927KiB/s (1973kB/s)(18.8MiB/10005msec) 00:35:51.856 slat (usec): min=6, max=122, avg=35.20, stdev=23.59 00:35:51.856 clat (usec): min=15689, max=74970, avg=33010.37, stdev=2569.68 00:35:51.856 lat (usec): min=15776, max=74988, avg=33045.56, stdev=2567.12 00:35:51.856 clat percentiles (usec): 00:35:51.856 | 1.00th=[25035], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:35:51.856 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:35:51.856 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:51.856 | 99.00th=[39060], 99.50th=[42730], 99.90th=[62653], 99.95th=[63177], 00:35:51.856 | 99.99th=[74974] 00:35:51.856 bw ( KiB/s): min= 1772, max= 2080, per=4.16%, avg=1921.47, stdev=60.39, samples=19 00:35:51.856 iops : min= 443, max= 520, avg=480.37, stdev=15.10, samples=19 00:35:51.856 lat (msec) : 20=0.83%, 50=98.80%, 100=0.37% 00:35:51.856 cpu : usr=98.16%, sys=1.43%, ctx=13, majf=0, minf=50 00:35:51.856 IO depths : 1=1.4%, 2=4.5%, 4=12.6%, 8=67.4%, 16=14.1%, 32=0.0%, >=64=0.0% 00:35:51.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.856 complete : 0=0.0%, 4=91.8%, 8=5.4%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.856 issued rwts: total=4820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.856 filename2: (groupid=0, jobs=1): err= 0: pid=3366215: Sat Jul 13 20:23:37 2024 00:35:51.856 read: IOPS=478, BW=1916KiB/s (1962kB/s)(18.7MiB/10014msec) 00:35:51.856 slat (nsec): min=8103, max=78072, avg=29469.63, stdev=12858.10 00:35:51.856 clat (usec): min=17553, max=85312, avg=33176.18, stdev=3013.57 00:35:51.856 lat (usec): min=17565, max=85336, avg=33205.65, stdev=3012.85 00:35:51.856 clat percentiles (usec): 00:35:51.856 | 1.00th=[27395], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:51.856 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:35:51.856 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:35:51.856 | 99.00th=[40633], 99.50th=[47449], 99.90th=[71828], 99.95th=[85459], 00:35:51.856 | 99.99th=[85459] 00:35:51.856 bw ( KiB/s): min= 1664, max= 2032, per=4.14%, avg=1911.58, stdev=65.83, samples=19 00:35:51.856 iops : min= 416, max= 508, avg=477.89, stdev=16.46, samples=19 00:35:51.856 lat (msec) : 20=0.17%, 50=99.50%, 100=0.33% 00:35:51.856 cpu : usr=93.91%, sys=3.21%, ctx=219, majf=0, minf=39 00:35:51.856 IO depths : 1=1.6%, 2=7.5%, 4=24.2%, 8=55.7%, 16=11.1%, 32=0.0%, >=64=0.0% 00:35:51.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.856 complete : 0=0.0%, 
4=94.2%, 8=0.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.856 issued rwts: total=4796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.856 filename2: (groupid=0, jobs=1): err= 0: pid=3366216: Sat Jul 13 20:23:37 2024 00:35:51.856 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10017msec) 00:35:51.856 slat (nsec): min=8771, max=78429, avg=33029.28, stdev=12472.74 00:35:51.856 clat (usec): min=17368, max=39492, avg=32904.51, stdev=1385.18 00:35:51.856 lat (usec): min=17387, max=39517, avg=32937.54, stdev=1384.63 00:35:51.856 clat percentiles (usec): 00:35:51.856 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:51.856 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:51.856 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:51.856 | 99.00th=[37487], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:35:51.856 | 99.99th=[39584] 00:35:51.856 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1926.74, stdev=29.37, samples=19 00:35:51.856 iops : min= 480, max= 512, avg=481.68, stdev= 7.34, samples=19 00:35:51.856 lat (msec) : 20=0.33%, 50=99.67% 00:35:51.856 cpu : usr=89.22%, sys=5.10%, ctx=428, majf=0, minf=34 00:35:51.856 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:51.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.856 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.856 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.856 filename2: (groupid=0, jobs=1): err= 0: pid=3366217: Sat Jul 13 20:23:37 2024 00:35:51.856 read: IOPS=487, BW=1949KiB/s (1996kB/s)(19.1MiB/10016msec) 00:35:51.856 slat (usec): min=5, max=112, avg=20.65, stdev=18.90 00:35:51.856 clat (usec): min=2887, max=40299, avg=32656.89, stdev=3230.07 00:35:51.856 lat (usec): min=2908, max=40315, avg=32677.53, stdev=3230.11 00:35:51.856 clat percentiles (usec): 00:35:51.856 | 1.00th=[13435], 5.00th=[32375], 10.00th=[32637], 20.00th=[32637], 00:35:51.856 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:35:51.856 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:51.856 | 99.00th=[35914], 99.50th=[38011], 99.90th=[40109], 99.95th=[40109], 00:35:51.856 | 99.99th=[40109] 00:35:51.856 bw ( KiB/s): min= 1920, max= 2304, per=4.21%, avg=1945.60, stdev=89.07, samples=20 00:35:51.856 iops : min= 480, max= 576, avg=486.40, stdev=22.27, samples=20 00:35:51.856 lat (msec) : 4=0.33%, 10=0.66%, 20=0.37%, 50=98.65% 00:35:51.856 cpu : usr=98.13%, sys=1.47%, ctx=20, majf=0, minf=40 00:35:51.856 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:51.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.856 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.856 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.856 filename2: (groupid=0, jobs=1): err= 0: pid=3366218: Sat Jul 13 20:23:37 2024 00:35:51.856 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10004msec) 00:35:51.856 slat (usec): min=8, max=102, avg=37.26, stdev=14.77 00:35:51.856 clat (usec): min=11518, max=67393, avg=33028.69, stdev=2714.63 00:35:51.856 lat (usec): min=11530, max=67425, avg=33065.95, stdev=2714.71 00:35:51.856 
clat percentiles (usec): 00:35:51.856 | 1.00th=[29754], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:35:51.856 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:35:51.856 | 70.00th=[33162], 80.00th=[33162], 90.00th=[33817], 95.00th=[34341], 00:35:51.856 | 99.00th=[39060], 99.50th=[49546], 99.90th=[67634], 99.95th=[67634], 00:35:51.856 | 99.99th=[67634] 00:35:51.856 bw ( KiB/s): min= 1664, max= 2048, per=4.15%, avg=1915.79, stdev=68.27, samples=19 00:35:51.856 iops : min= 416, max= 512, avg=478.95, stdev=17.07, samples=19 00:35:51.856 lat (msec) : 20=0.67%, 50=98.83%, 100=0.50% 00:35:51.856 cpu : usr=96.61%, sys=2.22%, ctx=62, majf=0, minf=39 00:35:51.856 IO depths : 1=4.5%, 2=9.4%, 4=19.9%, 8=57.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:35:51.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.856 complete : 0=0.0%, 4=93.1%, 8=2.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.856 issued rwts: total=4806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:51.856 00:35:51.856 Run status group 0 (all jobs): 00:35:51.856 READ: bw=45.1MiB/s (47.3MB/s), 1916KiB/s-1949KiB/s (1962kB/s-1996kB/s), io=452MiB (474MB), run=10004-10033msec 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.856 20:23:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.856 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.857 bdev_null0 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.857 [2024-07-13 20:23:38.172700] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.857 bdev_null1 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # 
fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:51.857 { 00:35:51.857 "params": { 00:35:51.857 "name": "Nvme$subsystem", 00:35:51.857 "trtype": "$TEST_TRANSPORT", 00:35:51.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:51.857 "adrfam": "ipv4", 00:35:51.857 "trsvcid": "$NVMF_PORT", 00:35:51.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:51.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:51.857 "hdgst": ${hdgst:-false}, 00:35:51.857 "ddgst": ${ddgst:-false} 00:35:51.857 }, 00:35:51.857 "method": "bdev_nvme_attach_controller" 00:35:51.857 } 00:35:51.857 EOF 00:35:51.857 )") 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:51.857 { 00:35:51.857 "params": { 00:35:51.857 "name": "Nvme$subsystem", 00:35:51.857 "trtype": "$TEST_TRANSPORT", 00:35:51.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:51.857 "adrfam": "ipv4", 00:35:51.857 "trsvcid": "$NVMF_PORT", 00:35:51.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:51.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:51.857 "hdgst": ${hdgst:-false}, 00:35:51.857 "ddgst": ${ddgst:-false} 00:35:51.857 }, 00:35:51.857 "method": "bdev_nvme_attach_controller" 00:35:51.857 } 00:35:51.857 EOF 00:35:51.857 
)") 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:51.857 "params": { 00:35:51.857 "name": "Nvme0", 00:35:51.857 "trtype": "tcp", 00:35:51.857 "traddr": "10.0.0.2", 00:35:51.857 "adrfam": "ipv4", 00:35:51.857 "trsvcid": "4420", 00:35:51.857 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:51.857 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:51.857 "hdgst": false, 00:35:51.857 "ddgst": false 00:35:51.857 }, 00:35:51.857 "method": "bdev_nvme_attach_controller" 00:35:51.857 },{ 00:35:51.857 "params": { 00:35:51.857 "name": "Nvme1", 00:35:51.857 "trtype": "tcp", 00:35:51.857 "traddr": "10.0.0.2", 00:35:51.857 "adrfam": "ipv4", 00:35:51.857 "trsvcid": "4420", 00:35:51.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:51.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:51.857 "hdgst": false, 00:35:51.857 "ddgst": false 00:35:51.857 }, 00:35:51.857 "method": "bdev_nvme_attach_controller" 00:35:51.857 }' 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:51.857 20:23:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:51.857 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:51.857 ... 00:35:51.857 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:51.857 ... 
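
Note: the trace above amounts to target/dif.sh generating a JSON config with one bdev_nvme_attach_controller entry per subsystem (the printf output just above) and handing it to fio on /dev/fd/62, with LD_PRELOAD pointing at SPDK's external bdev ioengine. A rough stand-alone sketch of the same initiator-side step follows; the "subsystems"/"bdev"/"config" wrapper is the usual SPDK JSON-config shape rather than something shown verbatim in this log, the /tmp path and time_based flag are assumptions, and the job options mirror the bs=8k,16k,128k / numjobs=2 / iodepth=8 / runtime=5 parameters set at target/dif.sh@115 (a single bs and one controller kept for brevity).

    # Sketch only: re-run the initiator side by hand (assumes fio was built against
    # the SPDK plugin, i.e. SPDK configured with --with-fio=/usr/src/fio)
    cat > /tmp/bdev_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Nvme0n1 follows SPDK's <controller>n<nsid> bdev naming; thread=1 is
    # required by the SPDK fio plugin
    LD_PRELOAD=./spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev_nvme.json \
        --name=filename0 --filename=Nvme0n1 --rw=randread --thread=1 \
        --bs=8k --numjobs=2 --iodepth=8 --runtime=5 --time_based=1
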
00:35:51.857 fio-3.35 00:35:51.857 Starting 4 threads 00:35:51.857 EAL: No free 2048 kB hugepages reported on node 1 00:35:57.121 00:35:57.121 filename0: (groupid=0, jobs=1): err= 0: pid=3367524: Sat Jul 13 20:23:44 2024 00:35:57.121 read: IOPS=1742, BW=13.6MiB/s (14.3MB/s)(68.1MiB/5003msec) 00:35:57.121 slat (nsec): min=3907, max=88521, avg=13451.74, stdev=7466.26 00:35:57.121 clat (usec): min=2292, max=52890, avg=4549.31, stdev=1663.65 00:35:57.121 lat (usec): min=2348, max=52918, avg=4562.76, stdev=1663.57 00:35:57.121 clat percentiles (usec): 00:35:57.121 | 1.00th=[ 3163], 5.00th=[ 3687], 10.00th=[ 3851], 20.00th=[ 4015], 00:35:57.121 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:35:57.121 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 6128], 95.00th=[ 6325], 00:35:57.121 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 9634], 99.95th=[52691], 00:35:57.121 | 99.99th=[52691] 00:35:57.121 bw ( KiB/s): min=12976, max=14160, per=24.85%, avg=13944.00, stdev=348.28, samples=10 00:35:57.121 iops : min= 1622, max= 1770, avg=1743.00, stdev=43.54, samples=10 00:35:57.121 lat (msec) : 4=18.11%, 10=81.80%, 100=0.09% 00:35:57.121 cpu : usr=95.02%, sys=4.50%, ctx=10, majf=0, minf=9 00:35:57.121 IO depths : 1=0.2%, 2=2.6%, 4=69.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.121 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.121 issued rwts: total=8718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.121 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:57.121 filename0: (groupid=0, jobs=1): err= 0: pid=3367525: Sat Jul 13 20:23:44 2024 00:35:57.121 read: IOPS=1745, BW=13.6MiB/s (14.3MB/s)(68.2MiB/5001msec) 00:35:57.121 slat (nsec): min=3933, max=53328, avg=14754.08, stdev=7164.96 00:35:57.121 clat (usec): min=1704, max=7733, avg=4538.81, stdev=785.49 00:35:57.121 lat (usec): min=1713, max=7741, avg=4553.57, stdev=784.69 00:35:57.121 clat percentiles (usec): 00:35:57.121 | 1.00th=[ 3195], 5.00th=[ 3654], 10.00th=[ 3785], 20.00th=[ 4047], 00:35:57.121 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4424], 00:35:57.121 | 70.00th=[ 4555], 80.00th=[ 4948], 90.00th=[ 5997], 95.00th=[ 6259], 00:35:57.121 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7373], 99.95th=[ 7439], 00:35:57.121 | 99.99th=[ 7767] 00:35:57.121 bw ( KiB/s): min=13408, max=14576, per=24.84%, avg=13937.78, stdev=345.99, samples=9 00:35:57.121 iops : min= 1676, max= 1822, avg=1742.22, stdev=43.25, samples=9 00:35:57.121 lat (msec) : 2=0.05%, 4=18.16%, 10=81.79% 00:35:57.121 cpu : usr=95.48%, sys=3.92%, ctx=14, majf=0, minf=9 00:35:57.121 IO depths : 1=0.1%, 2=2.0%, 4=69.6%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.122 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.122 issued rwts: total=8728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.122 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:57.122 filename1: (groupid=0, jobs=1): err= 0: pid=3367526: Sat Jul 13 20:23:44 2024 00:35:57.122 read: IOPS=1762, BW=13.8MiB/s (14.4MB/s)(68.9MiB/5002msec) 00:35:57.122 slat (usec): min=3, max=101, avg=14.87, stdev= 7.72 00:35:57.122 clat (usec): min=2342, max=8295, avg=4496.03, stdev=687.72 00:35:57.122 lat (usec): min=2390, max=8305, avg=4510.90, stdev=686.73 00:35:57.122 clat percentiles (usec): 00:35:57.122 | 1.00th=[ 3458], 5.00th=[ 3851], 
10.00th=[ 3949], 20.00th=[ 4047], 00:35:57.122 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:35:57.122 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 5669], 95.00th=[ 6128], 00:35:57.122 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 8094], 99.95th=[ 8160], 00:35:57.122 | 99.99th=[ 8291] 00:35:57.122 bw ( KiB/s): min=13216, max=14464, per=25.12%, avg=14094.22, stdev=404.09, samples=9 00:35:57.122 iops : min= 1652, max= 1808, avg=1761.78, stdev=50.51, samples=9 00:35:57.122 lat (msec) : 4=14.33%, 10=85.67% 00:35:57.122 cpu : usr=95.06%, sys=3.94%, ctx=64, majf=0, minf=9 00:35:57.122 IO depths : 1=0.1%, 2=2.0%, 4=67.5%, 8=30.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.122 complete : 0=0.0%, 4=95.0%, 8=5.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.122 issued rwts: total=8816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.122 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:57.122 filename1: (groupid=0, jobs=1): err= 0: pid=3367528: Sat Jul 13 20:23:44 2024 00:35:57.122 read: IOPS=1764, BW=13.8MiB/s (14.5MB/s)(69.0MiB/5003msec) 00:35:57.122 slat (nsec): min=4222, max=62222, avg=13189.00, stdev=7017.41 00:35:57.122 clat (usec): min=1763, max=8223, avg=4493.26, stdev=813.35 00:35:57.122 lat (usec): min=1776, max=8260, avg=4506.45, stdev=813.08 00:35:57.122 clat percentiles (usec): 00:35:57.122 | 1.00th=[ 3064], 5.00th=[ 3556], 10.00th=[ 3785], 20.00th=[ 3949], 00:35:57.122 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4359], 60.00th=[ 4424], 00:35:57.122 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 6063], 95.00th=[ 6325], 00:35:57.122 | 99.00th=[ 6718], 99.50th=[ 6915], 99.90th=[ 7242], 99.95th=[ 7701], 00:35:57.122 | 99.99th=[ 8225] 00:35:57.122 bw ( KiB/s): min=13824, max=14928, per=25.16%, avg=14116.80, stdev=316.34, samples=10 00:35:57.122 iops : min= 1728, max= 1866, avg=1764.60, stdev=39.54, samples=10 00:35:57.122 lat (msec) : 2=0.02%, 4=23.33%, 10=76.65% 00:35:57.122 cpu : usr=95.62%, sys=3.92%, ctx=15, majf=0, minf=2 00:35:57.122 IO depths : 1=0.2%, 2=2.7%, 4=69.5%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:57.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.122 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:57.122 issued rwts: total=8826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:57.122 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:57.122 00:35:57.122 Run status group 0 (all jobs): 00:35:57.122 READ: bw=54.8MiB/s (57.5MB/s), 13.6MiB/s-13.8MiB/s (14.3MB/s-14.5MB/s), io=274MiB (287MB), run=5001-5003msec 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.122 00:35:57.122 real 0m24.192s 00:35:57.122 user 4m29.317s 00:35:57.122 sys 0m8.117s 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:57.122 20:23:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:57.122 ************************************ 00:35:57.122 END TEST fio_dif_rand_params 00:35:57.122 ************************************ 00:35:57.122 20:23:44 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:57.122 20:23:44 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:35:57.122 20:23:44 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:57.122 20:23:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:57.122 ************************************ 00:35:57.122 START TEST fio_dif_digest 00:35:57.122 ************************************ 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:57.122 20:23:44 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:57.122 bdev_null0 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:57.122 [2024-07-13 20:23:44.558595] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:57.122 20:23:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:57.122 { 00:35:57.122 "params": { 00:35:57.122 "name": "Nvme$subsystem", 00:35:57.122 "trtype": "$TEST_TRANSPORT", 00:35:57.122 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:57.122 "adrfam": "ipv4", 00:35:57.122 "trsvcid": "$NVMF_PORT", 00:35:57.122 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:57.122 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:57.122 "hdgst": ${hdgst:-false}, 00:35:57.122 "ddgst": ${ddgst:-false} 00:35:57.122 }, 00:35:57.122 "method": "bdev_nvme_attach_controller" 00:35:57.122 } 00:35:57.122 EOF 00:35:57.123 )") 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
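
Note: the ldd/grep/awk entries above are the harness probing whether the fio plugin was linked against a sanitizer, so the sanitizer runtime can be placed ahead of the plugin in LD_PRELOAD; the same probe repeats for libclang_rt.asan just below, both come back empty on this build, and LD_PRELOAD ends up holding only the plugin itself. Condensed, and with the workspace path shortened, the logic is roughly:

    # Sketch of the sanitizer probe from autotest_common.sh (path shortened)
    plugin=./spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    [[ -n "$asan_lib" ]] || asan_lib=$(ldd "$plugin" | grep libclang_rt.asan | awk '{print $3}')
    # both probes returned empty here, so only the plugin gets preloaded
    export LD_PRELOAD="$asan_lib $plugin"
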
00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:57.123 "params": { 00:35:57.123 "name": "Nvme0", 00:35:57.123 "trtype": "tcp", 00:35:57.123 "traddr": "10.0.0.2", 00:35:57.123 "adrfam": "ipv4", 00:35:57.123 "trsvcid": "4420", 00:35:57.123 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:57.123 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:57.123 "hdgst": true, 00:35:57.123 "ddgst": true 00:35:57.123 }, 00:35:57.123 "method": "bdev_nvme_attach_controller" 00:35:57.123 }' 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:57.123 20:23:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:57.380 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:57.380 ... 
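
Note: pulled out of the xtrace, the target-side setup for this digest run is four RPCs: a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, wrapped in an NVMe-oF subsystem with a TCP listener on 10.0.0.2:4420. rpc_cmd in the trace is the harness wrapper; run stand-alone, the same sequence would go through scripts/rpc.py (path assumed):

    # Sketch: create_subsystem 0 from target/dif.sh as plain RPC calls
    ./spdk/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    ./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The digests themselves are requested initiator-side: the JSON printed just above passes "hdgst": true and "ddgst": true into bdev_nvme_attach_controller, which is what makes this a header/data-digest test rather than another rand_params pass.
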
00:35:57.380 fio-3.35 00:35:57.380 Starting 3 threads 00:35:57.380 EAL: No free 2048 kB hugepages reported on node 1 00:36:09.572 00:36:09.572 filename0: (groupid=0, jobs=1): err= 0: pid=3368359: Sat Jul 13 20:23:55 2024 00:36:09.572 read: IOPS=217, BW=27.2MiB/s (28.5MB/s)(272MiB/10005msec) 00:36:09.572 slat (usec): min=4, max=167, avg=16.86, stdev= 5.62 00:36:09.572 clat (usec): min=5074, max=56254, avg=13764.60, stdev=4839.26 00:36:09.572 lat (usec): min=5089, max=56273, avg=13781.46, stdev=4839.47 00:36:09.572 clat percentiles (usec): 00:36:09.572 | 1.00th=[ 8848], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[12256], 00:36:09.572 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:36:09.572 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15008], 95.00th=[15401], 00:36:09.572 | 99.00th=[53740], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:36:09.572 | 99.99th=[56361] 00:36:09.572 bw ( KiB/s): min=25088, max=30720, per=35.66%, avg=27840.00, stdev=1644.19, samples=20 00:36:09.572 iops : min= 196, max= 240, avg=217.50, stdev=12.85, samples=20 00:36:09.572 lat (msec) : 10=5.56%, 20=93.06%, 50=0.14%, 100=1.24% 00:36:09.572 cpu : usr=94.05%, sys=5.15%, ctx=49, majf=0, minf=169 00:36:09.572 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:09.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.572 issued rwts: total=2177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.572 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:09.572 filename0: (groupid=0, jobs=1): err= 0: pid=3368360: Sat Jul 13 20:23:55 2024 00:36:09.572 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(250MiB/10044msec) 00:36:09.572 slat (nsec): min=4625, max=99776, avg=16582.98, stdev=4122.44 00:36:09.572 clat (usec): min=6315, max=58406, avg=15008.79, stdev=5693.90 00:36:09.572 lat (usec): min=6328, max=58506, avg=15025.37, stdev=5694.19 00:36:09.572 clat percentiles (usec): 00:36:09.572 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[11338], 20.00th=[13304], 00:36:09.572 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14615], 60.00th=[15008], 00:36:09.572 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16450], 95.00th=[16909], 00:36:09.572 | 99.00th=[54789], 99.50th=[56886], 99.90th=[57410], 99.95th=[57934], 00:36:09.572 | 99.99th=[58459] 00:36:09.572 bw ( KiB/s): min=20224, max=28672, per=32.79%, avg=25600.00, stdev=2462.48, samples=20 00:36:09.572 iops : min= 158, max= 224, avg=200.00, stdev=19.24, samples=20 00:36:09.572 lat (msec) : 10=4.50%, 20=93.61%, 50=0.20%, 100=1.70% 00:36:09.572 cpu : usr=94.89%, sys=4.60%, ctx=29, majf=0, minf=191 00:36:09.572 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:09.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.572 issued rwts: total=2002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.572 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:09.572 filename0: (groupid=0, jobs=1): err= 0: pid=3368361: Sat Jul 13 20:23:55 2024 00:36:09.572 read: IOPS=194, BW=24.3MiB/s (25.4MB/s)(243MiB/10035msec) 00:36:09.572 slat (nsec): min=4683, max=50852, avg=17794.58, stdev=4524.70 00:36:09.572 clat (usec): min=8805, max=59452, avg=15438.64, stdev=6258.96 00:36:09.572 lat (usec): min=8823, max=59472, avg=15456.43, stdev=6259.20 00:36:09.572 clat percentiles (usec): 00:36:09.572 | 
1.00th=[ 9634], 5.00th=[10290], 10.00th=[11731], 20.00th=[13566], 00:36:09.572 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14877], 60.00th=[15139], 00:36:09.572 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16581], 95.00th=[17171], 00:36:09.572 | 99.00th=[56361], 99.50th=[57410], 99.90th=[59507], 99.95th=[59507], 00:36:09.572 | 99.99th=[59507] 00:36:09.572 bw ( KiB/s): min=20736, max=27904, per=31.88%, avg=24885.75, stdev=1885.56, samples=20 00:36:09.572 iops : min= 162, max= 218, avg=194.40, stdev=14.72, samples=20 00:36:09.572 lat (msec) : 10=2.72%, 20=94.97%, 50=0.21%, 100=2.11% 00:36:09.572 cpu : usr=94.17%, sys=5.35%, ctx=20, majf=0, minf=104 00:36:09.572 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:09.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:09.572 issued rwts: total=1947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:09.572 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:09.572 00:36:09.572 Run status group 0 (all jobs): 00:36:09.572 READ: bw=76.2MiB/s (79.9MB/s), 24.3MiB/s-27.2MiB/s (25.4MB/s-28.5MB/s), io=766MiB (803MB), run=10005-10044msec 00:36:09.573 20:23:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:09.573 20:23:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:09.573 20:23:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:09.573 20:23:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:09.573 20:23:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:09.573 20:23:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:09.573 20:23:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.573 20:23:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:09.573 20:23:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.573 20:23:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:09.573 20:23:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.573 20:23:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:09.573 20:23:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.573 00:36:09.573 real 0m11.209s 00:36:09.573 user 0m29.597s 00:36:09.573 sys 0m1.824s 00:36:09.573 20:23:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:09.573 20:23:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:09.573 ************************************ 00:36:09.573 END TEST fio_dif_digest 00:36:09.573 ************************************ 00:36:09.573 20:23:55 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:09.573 20:23:55 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:09.573 20:23:55 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:09.573 20:23:55 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:09.573 20:23:55 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:09.573 20:23:55 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:09.573 20:23:55 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:09.573 20:23:55 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:09.573 rmmod nvme_tcp 00:36:09.573 rmmod nvme_fabrics 
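
Note: teardown mirrors setup. fio_dif_digest deletes its subsystem and null bdev over RPC (the nvmf_delete_subsystem and bdev_null_delete calls traced above), then nvmftestfini unloads the kernel initiator modules; the rmmod messages surrounding this note are that unload, with nvme_keyring following on the next line. A hand-run sketch, again assuming the rpc.py path:

    # Sketch: destroy_subsystems 0, then the module unload from nvmftestfini
    ./spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    ./spdk/scripts/rpc.py bdev_null_delete bdev_null0
    modprobe -v -r nvme-tcp       # nvmf/common.sh retries this in a {1..20} loop with set +e
    modprobe -v -r nvme-fabrics
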
00:36:09.573 rmmod nvme_keyring 00:36:09.573 20:23:55 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:09.573 20:23:55 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:09.573 20:23:55 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:09.573 20:23:55 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3362304 ']' 00:36:09.573 20:23:55 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3362304 00:36:09.573 20:23:55 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 3362304 ']' 00:36:09.573 20:23:55 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 3362304 00:36:09.573 20:23:55 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:36:09.573 20:23:55 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:09.573 20:23:55 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3362304 00:36:09.573 20:23:55 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:09.573 20:23:55 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:09.573 20:23:55 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3362304' 00:36:09.573 killing process with pid 3362304 00:36:09.573 20:23:55 nvmf_dif -- common/autotest_common.sh@965 -- # kill 3362304 00:36:09.573 20:23:55 nvmf_dif -- common/autotest_common.sh@970 -- # wait 3362304 00:36:09.573 20:23:56 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:09.573 20:23:56 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:09.573 Waiting for block devices as requested 00:36:09.573 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:09.832 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:09.832 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:09.832 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:10.090 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:10.090 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:10.090 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:10.090 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:10.349 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:10.349 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:10.349 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:10.349 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:10.607 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:10.607 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:10.607 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:10.607 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:10.865 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:10.865 20:23:58 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:10.865 20:23:58 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:10.865 20:23:58 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:10.865 20:23:58 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:10.865 20:23:58 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:10.865 20:23:58 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:10.865 20:23:58 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:13.399 20:24:00 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:13.399 00:36:13.399 real 1m6.519s 00:36:13.399 user 6m26.084s 00:36:13.399 sys 0m19.313s 00:36:13.399 20:24:00 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:13.399 20:24:00 nvmf_dif -- common/autotest_common.sh@10 -- 
# set +x 00:36:13.399 ************************************ 00:36:13.399 END TEST nvmf_dif 00:36:13.399 ************************************ 00:36:13.399 20:24:00 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:13.399 20:24:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:13.399 20:24:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:13.399 20:24:00 -- common/autotest_common.sh@10 -- # set +x 00:36:13.399 ************************************ 00:36:13.399 START TEST nvmf_abort_qd_sizes 00:36:13.399 ************************************ 00:36:13.399 20:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:13.399 * Looking for test storage... 00:36:13.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:13.400 20:24:00 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:13.400 20:24:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:14.831 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:14.831 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:14.831 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:14.831 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:14.832 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
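gather_supported_nvmf_pci_devs, traced above, filters the PCI bus against a whitelist of Intel E810/X722 and Mellanox device IDs and then resolves each surviving function to its kernel network interface with a sysfs glob, which is where the cvl_0_0/cvl_0_1 names come from. The resolution step in isolation, using a BDF taken from this run:

# Map a PCI function to its net device the same way the trace above does:
bdf=0000:0a:00.0
for dev in "/sys/bus/pci/devices/$bdf/net/"*; do
    echo "Found net device under $bdf: ${dev##*/}"   # prints cvl_0_0 on this host
done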
00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:14.832 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:15.090 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:15.090 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:15.090 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:15.090 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:15.090 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:15.090 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:15.090 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:15.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:15.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:36:15.090 00:36:15.090 --- 10.0.0.2 ping statistics --- 00:36:15.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.090 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:36:15.090 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:15.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:15.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:36:15.090 00:36:15.090 --- 10.0.0.1 ping statistics --- 00:36:15.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:15.090 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:36:15.090 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:15.090 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:15.090 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:15.090 20:24:02 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:16.031 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:16.031 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:16.031 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:16.031 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:16.290 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:16.290 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:16.290 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:16.290 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:16.290 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:16.290 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:16.290 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:16.290 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:16.290 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:16.290 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:16.290 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:16.290 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:17.226 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3373255 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3373255 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 3373255 ']' 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
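nvmf_tcp_init, traced above, builds the loopback topology from the host's two physical ports rather than veth pairs: cvl_0_0 moves into a fresh network namespace as the target-side interface (10.0.0.2/24) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), with an iptables rule admitting NVMe/TCP traffic on port 4420; the two pings then prove the path in both directions before the target starts. Condensed from the trace (interface names are from this run):

# Two-port loopback topology used by these tests:
sudo ip netns add cvl_0_0_ns_spdk
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
sudo ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
# nvmf_tgt then runs inside the namespace, as in the trace just above:
# sudo ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf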
00:36:17.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:17.226 20:24:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:17.485 [2024-07-13 20:24:04.908832] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:36:17.485 [2024-07-13 20:24:04.908912] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:17.485 EAL: No free 2048 kB hugepages reported on node 1 00:36:17.485 [2024-07-13 20:24:04.972070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:17.485 [2024-07-13 20:24:05.061608] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:17.485 [2024-07-13 20:24:05.061678] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:17.485 [2024-07-13 20:24:05.061719] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:17.485 [2024-07-13 20:24:05.061731] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:17.485 [2024-07-13 20:24:05.061740] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:17.485 [2024-07-13 20:24:05.061837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:17.485 [2024-07-13 20:24:05.064888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:17.485 [2024-07-13 20:24:05.064958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:17.485 [2024-07-13 20:24:05.064961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:17.743 20:24:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:17.743 ************************************ 00:36:17.743 START TEST spdk_target_abort 00:36:17.743 ************************************ 00:36:17.743 20:24:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:36:17.743 20:24:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:17.743 20:24:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:17.743 20:24:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.743 20:24:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.025 spdk_targetn1 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.025 [2024-07-13 20:24:08.059795] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:21.025 [2024-07-13 20:24:08.092067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:21.025 20:24:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:21.025 EAL: No free 2048 kB hugepages reported on node 1 
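rabort then drives SPDK's abort example at the listener it just created, once per queue depth in qds=(4 24 64); each run reports total I/O completed, aborts submitted, and per-abort success/unsuccess counts, as in the summaries that follow. A sketch of the sweep; the flag meanings in the comments are the example's usual perf-style options, stated from general knowledge of the tool rather than read out of this log:

# The queue-depth sweep behind the three runs below (paths shortened):
#   -q I/O queue depth   -w workload pattern   -M read percentage
#   -o I/O size in bytes -r target transport ID
for qd in 4 24 64; do
    build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done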
00:36:24.306 Initializing NVMe Controllers 00:36:24.306 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:24.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:24.306 Initialization complete. Launching workers. 00:36:24.306 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10669, failed: 0 00:36:24.306 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1255, failed to submit 9414 00:36:24.306 success 848, unsuccess 407, failed 0 00:36:24.306 20:24:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:24.306 20:24:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:24.306 EAL: No free 2048 kB hugepages reported on node 1 00:36:27.584 Initializing NVMe Controllers 00:36:27.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:27.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:27.584 Initialization complete. Launching workers. 00:36:27.584 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8753, failed: 0 00:36:27.584 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1264, failed to submit 7489 00:36:27.584 success 303, unsuccess 961, failed 0 00:36:27.584 20:24:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:27.584 20:24:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:27.584 EAL: No free 2048 kB hugepages reported on node 1 00:36:30.864 Initializing NVMe Controllers 00:36:30.864 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:30.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:30.864 Initialization complete. Launching workers. 
00:36:30.864 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31533, failed: 0 00:36:30.864 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2713, failed to submit 28820 00:36:30.864 success 511, unsuccess 2202, failed 0 00:36:30.864 20:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:30.864 20:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.864 20:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:30.864 20:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:30.864 20:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:30.864 20:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:30.864 20:24:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:31.798 20:24:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:31.798 20:24:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3373255 00:36:31.798 20:24:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 3373255 ']' 00:36:31.798 20:24:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 3373255 00:36:31.798 20:24:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:36:31.798 20:24:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:31.798 20:24:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3373255 00:36:31.798 20:24:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:31.798 20:24:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:31.798 20:24:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3373255' 00:36:31.798 killing process with pid 3373255 00:36:31.798 20:24:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 3373255 00:36:31.798 20:24:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 3373255 00:36:32.059 00:36:32.060 real 0m14.293s 00:36:32.060 user 0m54.077s 00:36:32.060 sys 0m2.659s 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:32.060 ************************************ 00:36:32.060 END TEST spdk_target_abort 00:36:32.060 ************************************ 00:36:32.060 20:24:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:32.060 20:24:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:32.060 20:24:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:32.060 20:24:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:32.060 ************************************ 00:36:32.060 START TEST kernel_target_abort 00:36:32.060 
************************************ 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:32.060 20:24:19 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:32.994 Waiting for block devices as requested 00:36:32.994 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:33.254 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:33.254 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:33.537 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:33.537 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:33.537 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:33.537 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:33.537 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:33.796 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:33.796 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:33.796 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:34.055 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:34.055 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:34.055 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:34.055 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:34.313 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:34.313 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:34.313 20:24:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:34.313 20:24:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:34.313 20:24:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:34.313 20:24:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:36:34.313 20:24:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:34.313 20:24:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:36:34.313 20:24:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:34.313 20:24:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:34.313 20:24:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:34.571 No valid GPT data, bailing 00:36:34.571 20:24:21 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:34.571 20:24:22 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:34.571 00:36:34.571 Discovery Log Number of Records 2, Generation counter 2 00:36:34.571 =====Discovery Log Entry 0====== 00:36:34.571 trtype: tcp 00:36:34.571 adrfam: ipv4 00:36:34.571 subtype: current discovery subsystem 00:36:34.571 treq: not specified, sq flow control disable supported 00:36:34.571 portid: 1 00:36:34.571 trsvcid: 4420 00:36:34.571 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:34.571 traddr: 10.0.0.1 00:36:34.571 eflags: none 00:36:34.571 sectype: none 00:36:34.571 =====Discovery Log Entry 1====== 00:36:34.571 trtype: tcp 00:36:34.571 adrfam: ipv4 00:36:34.571 subtype: nvme subsystem 00:36:34.571 treq: not specified, sq flow control disable supported 00:36:34.571 portid: 1 00:36:34.571 trsvcid: 4420 00:36:34.571 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:34.571 traddr: 10.0.0.1 00:36:34.571 eflags: none 00:36:34.571 sectype: none 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:34.571 20:24:22 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:34.571 20:24:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:34.571 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.907 Initializing NVMe Controllers 00:36:37.907 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:37.907 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:37.907 Initialization complete. Launching workers. 00:36:37.907 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29404, failed: 0 00:36:37.907 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29404, failed to submit 0 00:36:37.907 success 0, unsuccess 29404, failed 0 00:36:37.907 20:24:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:37.907 20:24:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:37.907 EAL: No free 2048 kB hugepages reported on node 1 00:36:41.186 Initializing NVMe Controllers 00:36:41.186 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:41.186 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:41.186 Initialization complete. Launching workers. 
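kernel_target_abort points the same sweep at the Linux kernel nvmet target instead of an SPDK one. configure_kernel_target, traced earlier in this test, provisions that target entirely through configfs: create a subsystem and namespace, back the namespace with the local /dev/nvme0n1, open a TCP port on 10.0.0.1:4420, and link port to subsystem. xtrace does not show shell redirections, so the configfs attribute paths in this sketch are the standard nvmet ones, inferred rather than read from this log:

# Kernel NVMe-oF/TCP target provisioning, condensed from the trace above:
nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
sudo modprobe nvmet
sudo mkdir -p "$subsys/namespaces/1" "$port"
echo "SPDK-$nqn"  | sudo tee "$subsys/attr_model"               > /dev/null
echo 1            | sudo tee "$subsys/attr_allow_any_host"      > /dev/null
echo /dev/nvme0n1 | sudo tee "$subsys/namespaces/1/device_path" > /dev/null
echo 1            | sudo tee "$subsys/namespaces/1/enable"      > /dev/null
echo 10.0.0.1     | sudo tee "$port/addr_traddr"                > /dev/null
echo tcp          | sudo tee "$port/addr_trtype"                > /dev/null
echo 4420         | sudo tee "$port/addr_trsvcid"               > /dev/null
echo ipv4         | sudo tee "$port/addr_adrfam"                > /dev/null
sudo ln -s "$subsys" "$port/subsystems/"                        # expose the subsystem on the port

Note that all three kernel runs report success 0: every abort comes back unsuccessful, consistent with the kernel target completing in-flight I/O rather than aborting it.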
00:36:41.186 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 59593, failed: 0 00:36:41.186 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15010, failed to submit 44583 00:36:41.186 success 0, unsuccess 15010, failed 0 00:36:41.186 20:24:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:41.186 20:24:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:41.186 EAL: No free 2048 kB hugepages reported on node 1 00:36:44.460 Initializing NVMe Controllers 00:36:44.460 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:44.460 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:44.460 Initialization complete. Launching workers. 00:36:44.460 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 58436, failed: 0 00:36:44.460 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14554, failed to submit 43882 00:36:44.460 success 0, unsuccess 14554, failed 0 00:36:44.460 20:24:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:44.460 20:24:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:44.460 20:24:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:44.460 20:24:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:44.460 20:24:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:44.460 20:24:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:44.460 20:24:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:44.460 20:24:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:44.460 20:24:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:44.460 20:24:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:45.028 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:45.028 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:45.028 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:45.028 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:45.028 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:45.028 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:45.028 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:45.028 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:45.287 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:45.287 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:45.287 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:45.287 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:45.287 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:45.287 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:45.287 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:45.287 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:46.219 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:46.219 00:36:46.219 real 0m14.224s 00:36:46.219 user 0m4.723s 00:36:46.219 sys 0m3.426s 00:36:46.219 20:24:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:46.219 20:24:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.219 ************************************ 00:36:46.219 END TEST kernel_target_abort 00:36:46.219 ************************************ 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:46.219 rmmod nvme_tcp 00:36:46.219 rmmod nvme_fabrics 00:36:46.219 rmmod nvme_keyring 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3373255 ']' 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3373255 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 3373255 ']' 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 3373255 00:36:46.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3373255) - No such process 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 3373255 is not found' 00:36:46.219 Process with pid 3373255 is not found 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:46.219 20:24:33 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:47.591 Waiting for block devices as requested 00:36:47.591 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:47.591 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:47.591 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:47.849 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:47.849 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:47.849 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:48.107 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:48.107 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:48.107 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:48.107 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:48.365 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:48.365 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:48.365 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:48.365 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:48.623 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:48.623 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:36:48.623 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:48.882 20:24:36 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:48.882 20:24:36 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:48.882 20:24:36 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:48.882 20:24:36 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:48.882 20:24:36 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:48.882 20:24:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:48.882 20:24:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:50.788 20:24:38 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:50.788 00:36:50.788 real 0m37.838s 00:36:50.788 user 1m0.810s 00:36:50.788 sys 0m9.438s 00:36:50.788 20:24:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:50.788 20:24:38 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:50.788 ************************************ 00:36:50.788 END TEST nvmf_abort_qd_sizes 00:36:50.788 ************************************ 00:36:50.788 20:24:38 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:50.788 20:24:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:50.788 20:24:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:50.788 20:24:38 -- common/autotest_common.sh@10 -- # set +x 00:36:50.788 ************************************ 00:36:50.788 START TEST keyring_file 00:36:50.788 ************************************ 00:36:50.788 20:24:38 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:51.046 * Looking for test storage... 
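[Editor's note] The "setup.sh reset" step traced above hands the NVMe SSD (0000:88:00.0) and the ioatdma channels back from vfio-pci to their kernel drivers, which is why the log shows a column of "vfio-pci -> nvme" / "vfio-pci -> ioatdma" transitions. Below is a minimal sketch of the sysfs rebind mechanism that such a reset relies on; it is an illustration of the kernel interface, not SPDK's actual setup.sh, and the rebind() helper name is invented here. The BDF and target driver are the ones from the log; running it requires root on a machine that actually has the device.

    #!/usr/bin/env python3
    # Illustrative sysfs PCI driver rebind, assuming Linux >= 3.16
    # (driver_override and drivers_probe). Not SPDK's setup.sh.
    import os

    def rebind(bdf: str, new_driver: str) -> None:
        dev = f"/sys/bus/pci/devices/{bdf}"
        cur = os.path.join(dev, "driver")
        if os.path.islink(cur):
            # Detach from the current driver (e.g. vfio-pci).
            with open(os.path.join(cur, "unbind"), "w") as f:
                f.write(bdf)
        # driver_override forces the next probe onto the requested driver.
        with open(os.path.join(dev, "driver_override"), "w") as f:
            f.write(new_driver)
        with open("/sys/bus/pci/drivers_probe", "w") as f:
            f.write(bdf)
        # Clear the override so later hotplugs use normal ID matching.
        with open(os.path.join(dev, "driver_override"), "w") as f:
            f.write("\n")

    if __name__ == "__main__":
        rebind("0000:88:00.0", "nvme")  # SSD returned to the kernel, per the log
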
00:36:51.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:51.046 20:24:38 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:51.046 20:24:38 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:51.046 20:24:38 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:51.047 20:24:38 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:51.047 20:24:38 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:51.047 20:24:38 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:51.047 20:24:38 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.047 20:24:38 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.047 20:24:38 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.047 20:24:38 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:51.047 20:24:38 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:51.047 20:24:38 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:51.047 20:24:38 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:51.047 20:24:38 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:51.047 20:24:38 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:51.047 20:24:38 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:51.047 20:24:38 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5h4QKrVer3 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:51.047 20:24:38 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5h4QKrVer3 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5h4QKrVer3 00:36:51.047 20:24:38 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.5h4QKrVer3 00:36:51.047 20:24:38 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eiOFRdPxbo 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:51.047 20:24:38 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eiOFRdPxbo 00:36:51.047 20:24:38 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eiOFRdPxbo 00:36:51.047 20:24:38 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.eiOFRdPxbo 00:36:51.047 20:24:38 keyring_file -- keyring/file.sh@30 -- # tgtpid=3379511 00:36:51.047 20:24:38 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:51.047 20:24:38 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3379511 00:36:51.047 20:24:38 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3379511 ']' 00:36:51.047 20:24:38 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:51.047 20:24:38 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:51.047 20:24:38 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:51.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:51.047 20:24:38 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:51.047 20:24:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:51.047 [2024-07-13 20:24:38.608498] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
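[Editor's note] The prep_key traces above (format_interchange_psk -> format_key -> "python -") turn the raw test keys 00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00 into the NVMe TLS PSK interchange form "NVMeTLSkey-1:<hh>:<base64>:" before writing them to the mktemp'd files (/tmp/tmp.5h4QKrVer3, /tmp/tmp.eiOFRdPxbo) and chmod'ing them 0600. A sketch of that encoding is below; the payload layout (literal key-string bytes followed by a little-endian CRC-32, rather than hex-decoded bytes) is inferred from the helper's behavior and should be treated as illustrative, not normative.

    #!/usr/bin/env python3
    # Sketch of the NVMe TLS PSK interchange encoding computed by the
    # format_interchange_psk step in the trace. digest 0 = no hash transform.
    import base64
    import zlib

    def format_interchange_psk(key: str, digest: int = 0) -> str:
        raw = key.encode()  # assumption: key string used verbatim as bytes
        crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
        b64 = base64.b64encode(raw + crc).decode()
        return f"NVMeTLSkey-1:{digest:02x}:{b64}:"

    # key0 from the test; the result is what lands in /tmp/tmp.XXXXXXXXXX
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))

Registering the resulting file with keyring_file_add_key only succeeds while the file stays 0600; the suite later flips it to 0660 and 'rm -f's it to exercise the "Operation not permitted" and "No such device" error paths seen further down.
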
00:36:51.047 [2024-07-13 20:24:38.608587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3379511 ] 00:36:51.047 EAL: No free 2048 kB hugepages reported on node 1 00:36:51.047 [2024-07-13 20:24:38.663895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.305 [2024-07-13 20:24:38.753699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:51.562 20:24:39 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:51.562 20:24:39 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:51.562 20:24:39 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:51.562 20:24:39 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.562 20:24:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:51.562 [2024-07-13 20:24:39.009567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:51.562 null0 00:36:51.562 [2024-07-13 20:24:39.041623] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:51.562 [2024-07-13 20:24:39.042111] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:51.562 [2024-07-13 20:24:39.049638] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:51.562 20:24:39 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.562 20:24:39 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:51.562 20:24:39 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:51.562 20:24:39 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:51.562 20:24:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:51.562 20:24:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:51.562 20:24:39 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:51.562 20:24:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:51.562 20:24:39 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:51.562 20:24:39 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.562 20:24:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:51.563 [2024-07-13 20:24:39.057658] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:51.563 request: 00:36:51.563 { 00:36:51.563 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:51.563 "secure_channel": false, 00:36:51.563 "listen_address": { 00:36:51.563 "trtype": "tcp", 00:36:51.563 "traddr": "127.0.0.1", 00:36:51.563 "trsvcid": "4420" 00:36:51.563 }, 00:36:51.563 "method": "nvmf_subsystem_add_listener", 00:36:51.563 "req_id": 1 00:36:51.563 } 00:36:51.563 Got JSON-RPC error response 00:36:51.563 response: 00:36:51.563 { 00:36:51.563 "code": -32602, 00:36:51.563 "message": "Invalid parameters" 00:36:51.563 } 00:36:51.563 20:24:39 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:51.563 20:24:39 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:51.563 20:24:39 
keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:51.563 20:24:39 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:51.563 20:24:39 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:51.563 20:24:39 keyring_file -- keyring/file.sh@46 -- # bperfpid=3379520 00:36:51.563 20:24:39 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:51.563 20:24:39 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3379520 /var/tmp/bperf.sock 00:36:51.563 20:24:39 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3379520 ']' 00:36:51.563 20:24:39 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:51.563 20:24:39 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:51.563 20:24:39 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:51.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:51.563 20:24:39 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:51.563 20:24:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:51.563 [2024-07-13 20:24:39.104589] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 00:36:51.563 [2024-07-13 20:24:39.104665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3379520 ] 00:36:51.563 EAL: No free 2048 kB hugepages reported on node 1 00:36:51.563 [2024-07-13 20:24:39.165236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.821 [2024-07-13 20:24:39.258166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:51.821 20:24:39 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:51.821 20:24:39 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:36:51.821 20:24:39 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5h4QKrVer3 00:36:51.821 20:24:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5h4QKrVer3 00:36:52.079 20:24:39 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eiOFRdPxbo 00:36:52.079 20:24:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eiOFRdPxbo 00:36:52.338 20:24:39 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:52.338 20:24:39 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:52.338 20:24:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.338 20:24:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.338 20:24:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:52.596 20:24:40 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.5h4QKrVer3 == \/\t\m\p\/\t\m\p\.\5\h\4\Q\K\r\V\e\r\3 ]] 00:36:52.596 20:24:40 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:52.596 20:24:40 
keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:52.596 20:24:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.596 20:24:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.596 20:24:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:52.855 20:24:40 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.eiOFRdPxbo == \/\t\m\p\/\t\m\p\.\e\i\O\F\R\d\P\x\b\o ]] 00:36:52.855 20:24:40 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:52.855 20:24:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:52.855 20:24:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:52.855 20:24:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:52.855 20:24:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:52.855 20:24:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:53.112 20:24:40 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:53.112 20:24:40 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:53.112 20:24:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:53.112 20:24:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.112 20:24:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.112 20:24:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.112 20:24:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:53.370 20:24:40 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:53.370 20:24:40 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:53.370 20:24:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:53.628 [2024-07-13 20:24:41.082675] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:53.628 nvme0n1 00:36:53.629 20:24:41 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:53.629 20:24:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:53.629 20:24:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.629 20:24:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.629 20:24:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.629 20:24:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:53.886 20:24:41 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:53.886 20:24:41 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:53.886 20:24:41 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:53.886 20:24:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:53.886 20:24:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:53.886 
20:24:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:53.886 20:24:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:54.144 20:24:41 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:54.144 20:24:41 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:54.144 Running I/O for 1 seconds... 00:36:55.570 00:36:55.570 Latency(us) 00:36:55.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:55.570 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:55.570 nvme0n1 : 1.03 4598.51 17.96 0.00 0.00 27480.21 10145.94 43302.31 00:36:55.570 =================================================================================================================== 00:36:55.570 Total : 4598.51 17.96 0.00 0.00 27480.21 10145.94 43302.31 00:36:55.570 0 00:36:55.570 20:24:42 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:55.570 20:24:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:55.570 20:24:43 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:55.570 20:24:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:55.570 20:24:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:55.570 20:24:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.570 20:24:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.570 20:24:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:55.827 20:24:43 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:55.827 20:24:43 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:55.827 20:24:43 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:55.827 20:24:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:55.827 20:24:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:55.828 20:24:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:55.828 20:24:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:56.085 20:24:43 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:56.085 20:24:43 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:56.085 20:24:43 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:56.085 20:24:43 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:56.085 20:24:43 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:56.085 20:24:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:56.085 20:24:43 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:56.085 20:24:43 
keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:56.085 20:24:43 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:56.085 20:24:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:56.343 [2024-07-13 20:24:43.784428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1290310 (107): Transport endpoint is not connected 00:36:56.343 [2024-07-13 20:24:43.784434] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:56.343 [2024-07-13 20:24:43.785415] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1290310 (9): Bad file descriptor 00:36:56.343 [2024-07-13 20:24:43.786413] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:56.343 [2024-07-13 20:24:43.786436] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:56.343 [2024-07-13 20:24:43.786452] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:56.343 request: 00:36:56.343 { 00:36:56.343 "name": "nvme0", 00:36:56.343 "trtype": "tcp", 00:36:56.343 "traddr": "127.0.0.1", 00:36:56.343 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:56.343 "adrfam": "ipv4", 00:36:56.343 "trsvcid": "4420", 00:36:56.343 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:56.343 "psk": "key1", 00:36:56.343 "method": "bdev_nvme_attach_controller", 00:36:56.343 "req_id": 1 00:36:56.343 } 00:36:56.343 Got JSON-RPC error response 00:36:56.343 response: 00:36:56.343 { 00:36:56.343 "code": -5, 00:36:56.343 "message": "Input/output error" 00:36:56.343 } 00:36:56.343 20:24:43 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:56.343 20:24:43 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:56.343 20:24:43 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:56.343 20:24:43 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:56.343 20:24:43 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:56.343 20:24:43 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:56.343 20:24:43 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:56.343 20:24:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.343 20:24:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:56.343 20:24:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.599 20:24:44 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:56.599 20:24:44 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:56.599 20:24:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:56.599 20:24:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:56.599 20:24:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:56.599 20:24:44 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.599 20:24:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:56.857 20:24:44 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:56.857 20:24:44 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:56.857 20:24:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:57.114 20:24:44 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:57.114 20:24:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:57.372 20:24:44 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:57.372 20:24:44 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:57.372 20:24:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:57.630 20:24:45 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:57.630 20:24:45 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.5h4QKrVer3 00:36:57.630 20:24:45 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.5h4QKrVer3 00:36:57.630 20:24:45 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:57.630 20:24:45 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.5h4QKrVer3 00:36:57.630 20:24:45 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:57.630 20:24:45 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:57.630 20:24:45 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:57.630 20:24:45 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:57.630 20:24:45 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5h4QKrVer3 00:36:57.630 20:24:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5h4QKrVer3 00:36:57.888 [2024-07-13 20:24:45.289415] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5h4QKrVer3': 0100660 00:36:57.888 [2024-07-13 20:24:45.289455] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:57.888 request: 00:36:57.888 { 00:36:57.888 "name": "key0", 00:36:57.888 "path": "/tmp/tmp.5h4QKrVer3", 00:36:57.888 "method": "keyring_file_add_key", 00:36:57.888 "req_id": 1 00:36:57.888 } 00:36:57.888 Got JSON-RPC error response 00:36:57.888 response: 00:36:57.888 { 00:36:57.888 "code": -1, 00:36:57.888 "message": "Operation not permitted" 00:36:57.888 } 00:36:57.888 20:24:45 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:57.888 20:24:45 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:57.888 20:24:45 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:57.888 20:24:45 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:57.888 20:24:45 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.5h4QKrVer3 00:36:57.888 20:24:45 keyring_file -- keyring/file.sh@85 -- # bperf_cmd 
keyring_file_add_key key0 /tmp/tmp.5h4QKrVer3 00:36:57.888 20:24:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5h4QKrVer3 00:36:58.145 20:24:45 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.5h4QKrVer3 00:36:58.145 20:24:45 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:58.146 20:24:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:58.146 20:24:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:58.146 20:24:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:58.146 20:24:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:58.146 20:24:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:58.402 20:24:45 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:58.402 20:24:45 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:58.402 20:24:45 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:58.402 20:24:45 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:58.402 20:24:45 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:58.402 20:24:45 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:58.402 20:24:45 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:58.402 20:24:45 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:58.403 20:24:45 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:58.403 20:24:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:58.403 [2024-07-13 20:24:46.019378] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.5h4QKrVer3': No such file or directory 00:36:58.403 [2024-07-13 20:24:46.019418] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:58.403 [2024-07-13 20:24:46.019450] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:58.403 [2024-07-13 20:24:46.019463] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:58.403 [2024-07-13 20:24:46.019477] bdev_nvme.c:6269:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:58.403 request: 00:36:58.403 { 00:36:58.403 "name": "nvme0", 00:36:58.403 "trtype": "tcp", 00:36:58.403 "traddr": "127.0.0.1", 00:36:58.403 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:58.403 "adrfam": "ipv4", 00:36:58.403 "trsvcid": "4420", 00:36:58.403 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:58.403 "psk": "key0", 00:36:58.403 "method": "bdev_nvme_attach_controller", 
00:36:58.403 "req_id": 1 00:36:58.403 } 00:36:58.403 Got JSON-RPC error response 00:36:58.403 response: 00:36:58.403 { 00:36:58.403 "code": -19, 00:36:58.403 "message": "No such device" 00:36:58.403 } 00:36:58.403 20:24:46 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:58.403 20:24:46 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:58.403 20:24:46 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:58.403 20:24:46 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:58.403 20:24:46 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:58.403 20:24:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:58.660 20:24:46 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:58.660 20:24:46 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:58.660 20:24:46 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:58.660 20:24:46 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:58.660 20:24:46 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:58.660 20:24:46 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:58.660 20:24:46 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FnFrL0m2do 00:36:58.660 20:24:46 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:58.660 20:24:46 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:58.660 20:24:46 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:58.660 20:24:46 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:58.660 20:24:46 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:58.660 20:24:46 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:58.660 20:24:46 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:58.918 20:24:46 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FnFrL0m2do 00:36:58.918 20:24:46 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FnFrL0m2do 00:36:58.918 20:24:46 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.FnFrL0m2do 00:36:58.918 20:24:46 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FnFrL0m2do 00:36:58.918 20:24:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FnFrL0m2do 00:36:59.177 20:24:46 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:59.177 20:24:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:59.435 nvme0n1 00:36:59.435 20:24:46 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:59.435 20:24:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:59.435 20:24:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:59.435 20:24:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:59.435 20:24:46 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.435 20:24:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:59.694 20:24:47 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:59.694 20:24:47 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:59.694 20:24:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:59.952 20:24:47 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:59.952 20:24:47 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:59.952 20:24:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:59.952 20:24:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.952 20:24:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:00.210 20:24:47 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:00.210 20:24:47 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:00.210 20:24:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:00.210 20:24:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:00.210 20:24:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:00.210 20:24:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.210 20:24:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:00.467 20:24:47 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:00.467 20:24:47 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:00.467 20:24:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:00.724 20:24:48 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:00.724 20:24:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.724 20:24:48 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:00.982 20:24:48 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:00.982 20:24:48 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FnFrL0m2do 00:37:00.982 20:24:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FnFrL0m2do 00:37:01.240 20:24:48 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.eiOFRdPxbo 00:37:01.240 20:24:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.eiOFRdPxbo 00:37:01.240 20:24:48 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:01.240 20:24:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:01.805 nvme0n1 00:37:01.805 20:24:49 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:01.805 20:24:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:02.063 20:24:49 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:02.063 "subsystems": [ 00:37:02.063 { 00:37:02.063 "subsystem": "keyring", 00:37:02.063 "config": [ 00:37:02.063 { 00:37:02.063 "method": "keyring_file_add_key", 00:37:02.063 "params": { 00:37:02.063 "name": "key0", 00:37:02.063 "path": "/tmp/tmp.FnFrL0m2do" 00:37:02.063 } 00:37:02.063 }, 00:37:02.063 { 00:37:02.063 "method": "keyring_file_add_key", 00:37:02.063 "params": { 00:37:02.063 "name": "key1", 00:37:02.063 "path": "/tmp/tmp.eiOFRdPxbo" 00:37:02.063 } 00:37:02.063 } 00:37:02.063 ] 00:37:02.063 }, 00:37:02.063 { 00:37:02.063 "subsystem": "iobuf", 00:37:02.063 "config": [ 00:37:02.063 { 00:37:02.063 "method": "iobuf_set_options", 00:37:02.063 "params": { 00:37:02.063 "small_pool_count": 8192, 00:37:02.063 "large_pool_count": 1024, 00:37:02.063 "small_bufsize": 8192, 00:37:02.063 "large_bufsize": 135168 00:37:02.063 } 00:37:02.063 } 00:37:02.063 ] 00:37:02.063 }, 00:37:02.063 { 00:37:02.063 "subsystem": "sock", 00:37:02.063 "config": [ 00:37:02.063 { 00:37:02.063 "method": "sock_set_default_impl", 00:37:02.063 "params": { 00:37:02.063 "impl_name": "posix" 00:37:02.063 } 00:37:02.063 }, 00:37:02.063 { 00:37:02.063 "method": "sock_impl_set_options", 00:37:02.063 "params": { 00:37:02.063 "impl_name": "ssl", 00:37:02.063 "recv_buf_size": 4096, 00:37:02.063 "send_buf_size": 4096, 00:37:02.063 "enable_recv_pipe": true, 00:37:02.063 "enable_quickack": false, 00:37:02.063 "enable_placement_id": 0, 00:37:02.063 "enable_zerocopy_send_server": true, 00:37:02.063 "enable_zerocopy_send_client": false, 00:37:02.063 "zerocopy_threshold": 0, 00:37:02.063 "tls_version": 0, 00:37:02.063 "enable_ktls": false 00:37:02.063 } 00:37:02.063 }, 00:37:02.063 { 00:37:02.063 "method": "sock_impl_set_options", 00:37:02.063 "params": { 00:37:02.063 "impl_name": "posix", 00:37:02.063 "recv_buf_size": 2097152, 00:37:02.063 "send_buf_size": 2097152, 00:37:02.063 "enable_recv_pipe": true, 00:37:02.063 "enable_quickack": false, 00:37:02.063 "enable_placement_id": 0, 00:37:02.063 "enable_zerocopy_send_server": true, 00:37:02.063 "enable_zerocopy_send_client": false, 00:37:02.063 "zerocopy_threshold": 0, 00:37:02.063 "tls_version": 0, 00:37:02.063 "enable_ktls": false 00:37:02.063 } 00:37:02.063 } 00:37:02.063 ] 00:37:02.063 }, 00:37:02.063 { 00:37:02.063 "subsystem": "vmd", 00:37:02.063 "config": [] 00:37:02.063 }, 00:37:02.063 { 00:37:02.063 "subsystem": "accel", 00:37:02.063 "config": [ 00:37:02.063 { 00:37:02.063 "method": "accel_set_options", 00:37:02.063 "params": { 00:37:02.063 "small_cache_size": 128, 00:37:02.063 "large_cache_size": 16, 00:37:02.063 "task_count": 2048, 00:37:02.063 "sequence_count": 2048, 00:37:02.063 "buf_count": 2048 00:37:02.063 } 00:37:02.063 } 00:37:02.063 ] 00:37:02.063 }, 00:37:02.063 { 00:37:02.063 "subsystem": "bdev", 00:37:02.063 "config": [ 00:37:02.063 { 00:37:02.063 "method": "bdev_set_options", 00:37:02.063 "params": { 00:37:02.063 "bdev_io_pool_size": 65535, 00:37:02.063 "bdev_io_cache_size": 256, 00:37:02.063 "bdev_auto_examine": true, 00:37:02.063 "iobuf_small_cache_size": 128, 
00:37:02.063 "iobuf_large_cache_size": 16 00:37:02.063 } 00:37:02.063 }, 00:37:02.063 { 00:37:02.063 "method": "bdev_raid_set_options", 00:37:02.063 "params": { 00:37:02.063 "process_window_size_kb": 1024 00:37:02.063 } 00:37:02.063 }, 00:37:02.063 { 00:37:02.063 "method": "bdev_iscsi_set_options", 00:37:02.063 "params": { 00:37:02.063 "timeout_sec": 30 00:37:02.063 } 00:37:02.063 }, 00:37:02.063 { 00:37:02.063 "method": "bdev_nvme_set_options", 00:37:02.063 "params": { 00:37:02.063 "action_on_timeout": "none", 00:37:02.063 "timeout_us": 0, 00:37:02.063 "timeout_admin_us": 0, 00:37:02.063 "keep_alive_timeout_ms": 10000, 00:37:02.063 "arbitration_burst": 0, 00:37:02.063 "low_priority_weight": 0, 00:37:02.063 "medium_priority_weight": 0, 00:37:02.063 "high_priority_weight": 0, 00:37:02.063 "nvme_adminq_poll_period_us": 10000, 00:37:02.063 "nvme_ioq_poll_period_us": 0, 00:37:02.063 "io_queue_requests": 512, 00:37:02.063 "delay_cmd_submit": true, 00:37:02.063 "transport_retry_count": 4, 00:37:02.063 "bdev_retry_count": 3, 00:37:02.063 "transport_ack_timeout": 0, 00:37:02.063 "ctrlr_loss_timeout_sec": 0, 00:37:02.063 "reconnect_delay_sec": 0, 00:37:02.063 "fast_io_fail_timeout_sec": 0, 00:37:02.063 "disable_auto_failback": false, 00:37:02.063 "generate_uuids": false, 00:37:02.063 "transport_tos": 0, 00:37:02.063 "nvme_error_stat": false, 00:37:02.063 "rdma_srq_size": 0, 00:37:02.063 "io_path_stat": false, 00:37:02.063 "allow_accel_sequence": false, 00:37:02.063 "rdma_max_cq_size": 0, 00:37:02.063 "rdma_cm_event_timeout_ms": 0, 00:37:02.063 "dhchap_digests": [ 00:37:02.063 "sha256", 00:37:02.063 "sha384", 00:37:02.063 "sha512" 00:37:02.063 ], 00:37:02.063 "dhchap_dhgroups": [ 00:37:02.063 "null", 00:37:02.063 "ffdhe2048", 00:37:02.063 "ffdhe3072", 00:37:02.063 "ffdhe4096", 00:37:02.063 "ffdhe6144", 00:37:02.063 "ffdhe8192" 00:37:02.063 ] 00:37:02.063 } 00:37:02.063 }, 00:37:02.063 { 00:37:02.063 "method": "bdev_nvme_attach_controller", 00:37:02.063 "params": { 00:37:02.063 "name": "nvme0", 00:37:02.063 "trtype": "TCP", 00:37:02.063 "adrfam": "IPv4", 00:37:02.063 "traddr": "127.0.0.1", 00:37:02.063 "trsvcid": "4420", 00:37:02.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:02.063 "prchk_reftag": false, 00:37:02.063 "prchk_guard": false, 00:37:02.063 "ctrlr_loss_timeout_sec": 0, 00:37:02.063 "reconnect_delay_sec": 0, 00:37:02.063 "fast_io_fail_timeout_sec": 0, 00:37:02.063 "psk": "key0", 00:37:02.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:02.063 "hdgst": false, 00:37:02.063 "ddgst": false 00:37:02.063 } 00:37:02.063 }, 00:37:02.063 { 00:37:02.063 "method": "bdev_nvme_set_hotplug", 00:37:02.063 "params": { 00:37:02.063 "period_us": 100000, 00:37:02.063 "enable": false 00:37:02.063 } 00:37:02.063 }, 00:37:02.063 { 00:37:02.063 "method": "bdev_wait_for_examine" 00:37:02.063 } 00:37:02.063 ] 00:37:02.063 }, 00:37:02.063 { 00:37:02.063 "subsystem": "nbd", 00:37:02.063 "config": [] 00:37:02.063 } 00:37:02.063 ] 00:37:02.063 }' 00:37:02.063 20:24:49 keyring_file -- keyring/file.sh@114 -- # killprocess 3379520 00:37:02.063 20:24:49 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3379520 ']' 00:37:02.063 20:24:49 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3379520 00:37:02.063 20:24:49 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:02.063 20:24:49 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:02.063 20:24:49 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3379520 00:37:02.063 20:24:49 keyring_file 
-- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:02.063 20:24:49 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:02.063 20:24:49 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3379520' 00:37:02.063 killing process with pid 3379520 00:37:02.063 20:24:49 keyring_file -- common/autotest_common.sh@965 -- # kill 3379520 00:37:02.063 Received shutdown signal, test time was about 1.000000 seconds 00:37:02.063 00:37:02.063 Latency(us) 00:37:02.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:02.063 =================================================================================================================== 00:37:02.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:02.063 20:24:49 keyring_file -- common/autotest_common.sh@970 -- # wait 3379520 00:37:02.321 20:24:49 keyring_file -- keyring/file.sh@117 -- # bperfpid=3380974 00:37:02.321 20:24:49 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3380974 /var/tmp/bperf.sock 00:37:02.321 20:24:49 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 3380974 ']' 00:37:02.321 20:24:49 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:02.321 20:24:49 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:02.321 20:24:49 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:02.321 20:24:49 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:02.321 20:24:49 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:02.321 "subsystems": [ 00:37:02.321 { 00:37:02.321 "subsystem": "keyring", 00:37:02.321 "config": [ 00:37:02.321 { 00:37:02.321 "method": "keyring_file_add_key", 00:37:02.321 "params": { 00:37:02.321 "name": "key0", 00:37:02.321 "path": "/tmp/tmp.FnFrL0m2do" 00:37:02.321 } 00:37:02.321 }, 00:37:02.321 { 00:37:02.321 "method": "keyring_file_add_key", 00:37:02.321 "params": { 00:37:02.321 "name": "key1", 00:37:02.321 "path": "/tmp/tmp.eiOFRdPxbo" 00:37:02.321 } 00:37:02.321 } 00:37:02.321 ] 00:37:02.321 }, 00:37:02.321 { 00:37:02.321 "subsystem": "iobuf", 00:37:02.321 "config": [ 00:37:02.321 { 00:37:02.321 "method": "iobuf_set_options", 00:37:02.321 "params": { 00:37:02.321 "small_pool_count": 8192, 00:37:02.321 "large_pool_count": 1024, 00:37:02.321 "small_bufsize": 8192, 00:37:02.321 "large_bufsize": 135168 00:37:02.321 } 00:37:02.321 } 00:37:02.321 ] 00:37:02.321 }, 00:37:02.321 { 00:37:02.321 "subsystem": "sock", 00:37:02.321 "config": [ 00:37:02.321 { 00:37:02.321 "method": "sock_set_default_impl", 00:37:02.321 "params": { 00:37:02.321 "impl_name": "posix" 00:37:02.321 } 00:37:02.321 }, 00:37:02.321 { 00:37:02.321 "method": "sock_impl_set_options", 00:37:02.321 "params": { 00:37:02.321 "impl_name": "ssl", 00:37:02.321 "recv_buf_size": 4096, 00:37:02.321 "send_buf_size": 4096, 00:37:02.321 "enable_recv_pipe": true, 00:37:02.321 "enable_quickack": false, 00:37:02.321 "enable_placement_id": 0, 00:37:02.321 "enable_zerocopy_send_server": true, 00:37:02.321 "enable_zerocopy_send_client": false, 00:37:02.321 "zerocopy_threshold": 0, 00:37:02.321 "tls_version": 0, 00:37:02.321 "enable_ktls": false 00:37:02.321 } 00:37:02.321 }, 00:37:02.321 { 00:37:02.321 "method": "sock_impl_set_options", 00:37:02.321 "params": { 
00:37:02.321 "impl_name": "posix", 00:37:02.321 "recv_buf_size": 2097152, 00:37:02.321 "send_buf_size": 2097152, 00:37:02.321 "enable_recv_pipe": true, 00:37:02.321 "enable_quickack": false, 00:37:02.321 "enable_placement_id": 0, 00:37:02.321 "enable_zerocopy_send_server": true, 00:37:02.321 "enable_zerocopy_send_client": false, 00:37:02.321 "zerocopy_threshold": 0, 00:37:02.321 "tls_version": 0, 00:37:02.321 "enable_ktls": false 00:37:02.321 } 00:37:02.321 } 00:37:02.321 ] 00:37:02.321 }, 00:37:02.321 { 00:37:02.321 "subsystem": "vmd", 00:37:02.321 "config": [] 00:37:02.321 }, 00:37:02.321 { 00:37:02.321 "subsystem": "accel", 00:37:02.321 "config": [ 00:37:02.321 { 00:37:02.321 "method": "accel_set_options", 00:37:02.321 "params": { 00:37:02.321 "small_cache_size": 128, 00:37:02.321 "large_cache_size": 16, 00:37:02.321 "task_count": 2048, 00:37:02.321 "sequence_count": 2048, 00:37:02.321 "buf_count": 2048 00:37:02.321 } 00:37:02.321 } 00:37:02.321 ] 00:37:02.321 }, 00:37:02.321 { 00:37:02.321 "subsystem": "bdev", 00:37:02.321 "config": [ 00:37:02.321 { 00:37:02.321 "method": "bdev_set_options", 00:37:02.321 "params": { 00:37:02.321 "bdev_io_pool_size": 65535, 00:37:02.321 "bdev_io_cache_size": 256, 00:37:02.321 "bdev_auto_examine": true, 00:37:02.321 "iobuf_small_cache_size": 128, 00:37:02.321 "iobuf_large_cache_size": 16 00:37:02.321 } 00:37:02.321 }, 00:37:02.321 { 00:37:02.321 "method": "bdev_raid_set_options", 00:37:02.321 "params": { 00:37:02.321 "process_window_size_kb": 1024 00:37:02.321 } 00:37:02.321 }, 00:37:02.321 { 00:37:02.321 "method": "bdev_iscsi_set_options", 00:37:02.321 "params": { 00:37:02.321 "timeout_sec": 30 00:37:02.321 } 00:37:02.321 }, 00:37:02.321 { 00:37:02.321 "method": "bdev_nvme_set_options", 00:37:02.321 "params": { 00:37:02.321 "action_on_timeout": "none", 00:37:02.321 "timeout_us": 0, 00:37:02.321 "timeout_admin_us": 0, 00:37:02.321 "keep_alive_timeout_ms": 10000, 00:37:02.321 "arbitration_burst": 0, 00:37:02.321 "low_priority_weight": 0, 00:37:02.321 "medium_priority_weight": 0, 00:37:02.321 "high_priority_weight": 0, 00:37:02.321 "nvme_adminq_poll_period_us": 10000, 00:37:02.321 "nvme_ioq_poll_period_us": 0, 00:37:02.321 "io_queue_requests": 512, 00:37:02.321 "delay_cmd_submit": true, 00:37:02.321 "transport_retry_count": 4, 00:37:02.321 "bdev_retry_count": 3, 00:37:02.321 "transport_ack_timeout": 0, 00:37:02.321 "ctrlr_loss_timeout_sec": 0, 00:37:02.321 "reconnect_delay_sec": 0, 00:37:02.321 "fast_io_fail_timeout_sec": 0, 00:37:02.321 "disable_auto_failback": false, 00:37:02.321 "generate_uuids": false, 00:37:02.321 "transport_tos": 0, 00:37:02.321 "nvme_error_stat": false, 00:37:02.321 "rdma_srq_size": 0, 00:37:02.321 "io_path_stat": false, 00:37:02.321 "allow_accel_sequence": false, 00:37:02.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:37:02.321 "rdma_max_cq_size": 0, 00:37:02.321 "rdma_cm_event_timeout_ms": 0, 00:37:02.321 "dhchap_digests": [ 00:37:02.321 "sha256", 00:37:02.321 "sha384", 00:37:02.321 "sha512" 00:37:02.321 ], 00:37:02.321 "dhchap_dhgroups": [ 00:37:02.321 "null", 00:37:02.321 "ffdhe2048", 00:37:02.321 "ffdhe3072", 00:37:02.321 "ffdhe4096", 00:37:02.321 "ffdhe6144", 00:37:02.321 "ffdhe8192" 00:37:02.321 ] 00:37:02.321 } 00:37:02.321 }, 00:37:02.321 { 00:37:02.321 "method": "bdev_nvme_attach_controller", 00:37:02.321 "params": { 00:37:02.321 "name": "nvme0", 00:37:02.321 "trtype": "TCP", 00:37:02.321 "adrfam": "IPv4", 00:37:02.321 "traddr": "127.0.0.1", 00:37:02.321 "trsvcid": "4420", 00:37:02.321 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:02.321 "prchk_reftag": false, 00:37:02.321 "prchk_guard": false, 00:37:02.321 "ctrlr_loss_timeout_sec": 0, 00:37:02.321 "reconnect_delay_sec": 0, 00:37:02.321 "fast_io_fail_timeout_sec": 0, 00:37:02.321 "psk": "key0", 00:37:02.321 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:02.321 "hdgst": false, 00:37:02.321 "ddgst": false 00:37:02.321 } 00:37:02.321 }, 00:37:02.321 { 00:37:02.321 "method": "bdev_nvme_set_hotplug", 00:37:02.321 "params": { 00:37:02.321 "period_us": 100000, 00:37:02.321 "enable": false 00:37:02.321 } 00:37:02.321 }, 00:37:02.321 { 00:37:02.321 "method": "bdev_wait_for_examine" 00:37:02.321 } 00:37:02.321 ] 00:37:02.321 }, 00:37:02.321 { 00:37:02.321 "subsystem": "nbd", 00:37:02.321 "config": [] 00:37:02.321 } 00:37:02.321 ] 00:37:02.321 }' 00:37:02.321 20:24:49 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:02.321 20:24:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:02.321 [2024-07-13 20:24:49.795781] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
00:37:02.321 [2024-07-13 20:24:49.795877] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380974 ] 00:37:02.321 EAL: No free 2048 kB hugepages reported on node 1 00:37:02.321 [2024-07-13 20:24:49.852531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.321 [2024-07-13 20:24:49.941582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:02.579 [2024-07-13 20:24:50.129605] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:03.143 20:24:50 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:03.143 20:24:50 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:03.143 20:24:50 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:03.143 20:24:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.143 20:24:50 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:03.402 20:24:50 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:03.402 20:24:50 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:03.402 20:24:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:03.402 20:24:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:03.402 20:24:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:03.402 20:24:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.402 20:24:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:03.660 20:24:51 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:03.660 20:24:51 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:03.660 20:24:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:03.660 20:24:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:03.660 20:24:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:03.660 20:24:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.660 20:24:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:03.917 20:24:51 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:03.917 20:24:51 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:03.917 20:24:51 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:03.917 20:24:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:04.175 20:24:51 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:04.175 20:24:51 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:04.175 20:24:51 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.FnFrL0m2do /tmp/tmp.eiOFRdPxbo 00:37:04.175 20:24:51 keyring_file -- keyring/file.sh@20 -- # killprocess 3380974 00:37:04.175 20:24:51 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3380974 ']' 00:37:04.175 20:24:51 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3380974 00:37:04.175 20:24:51 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:37:04.175 20:24:51 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:04.175 20:24:51 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3380974 00:37:04.175 20:24:51 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:04.175 20:24:51 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:04.175 20:24:51 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3380974' 00:37:04.175 killing process with pid 3380974 00:37:04.175 20:24:51 keyring_file -- common/autotest_common.sh@965 -- # kill 3380974 00:37:04.175 Received shutdown signal, test time was about 1.000000 seconds 00:37:04.175 00:37:04.175 Latency(us) 00:37:04.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:04.175 =================================================================================================================== 00:37:04.175 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:04.175 20:24:51 keyring_file -- common/autotest_common.sh@970 -- # wait 3380974 00:37:04.433 20:24:51 keyring_file -- keyring/file.sh@21 -- # killprocess 3379511 00:37:04.433 20:24:51 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 3379511 ']' 00:37:04.433 20:24:51 keyring_file -- common/autotest_common.sh@950 -- # kill -0 3379511 00:37:04.433 20:24:51 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:04.433 20:24:51 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:04.433 20:24:51 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3379511 00:37:04.433 20:24:51 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:04.433 20:24:51 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:04.433 20:24:51 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3379511' 00:37:04.433 killing process with pid 3379511 00:37:04.433 20:24:51 keyring_file -- common/autotest_common.sh@965 -- # kill 3379511 00:37:04.433 [2024-07-13 20:24:51.979627] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:04.433 20:24:51 keyring_file -- common/autotest_common.sh@970 -- # wait 3379511 00:37:05.001 00:37:05.001 real 0m13.958s 00:37:05.001 user 0m34.351s 00:37:05.001 sys 0m3.348s 00:37:05.001 20:24:52 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:05.001 20:24:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:05.001 ************************************ 00:37:05.001 END TEST keyring_file 00:37:05.001 ************************************ 00:37:05.002 20:24:52 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:05.002 20:24:52 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:05.002 20:24:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:05.002 20:24:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:05.002 20:24:52 -- common/autotest_common.sh@10 -- # set +x 00:37:05.002 ************************************ 00:37:05.002 START TEST keyring_linux 00:37:05.002 ************************************ 00:37:05.002 20:24:52 keyring_linux -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:05.002 * Looking for test storage... 
00:37:05.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:05.002 20:24:52 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:05.002 20:24:52 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:05.002 20:24:52 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:05.002 20:24:52 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:05.002 20:24:52 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.002 20:24:52 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.002 20:24:52 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.002 20:24:52 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:05.002 20:24:52 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:05.002 20:24:52 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:05.002 20:24:52 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:05.002 20:24:52 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:05.002 20:24:52 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:05.002 20:24:52 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:05.002 20:24:52 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:05.002 20:24:52 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:05.002 /tmp/:spdk-test:key0 00:37:05.002 20:24:52 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:05.002 20:24:52 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:05.002 20:24:52 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:05.002 /tmp/:spdk-test:key1 00:37:05.002 20:24:52 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3381333 00:37:05.002 20:24:52 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:05.002 20:24:52 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3381333 00:37:05.002 20:24:52 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 3381333 ']' 00:37:05.002 20:24:52 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:05.002 20:24:52 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:05.002 20:24:52 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:05.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:05.002 20:24:52 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:05.002 20:24:52 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:05.002 [2024-07-13 20:24:52.621275] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
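prep_key above writes each PSK to a mode-0600 file in the NVMe TLS interchange format, NVMeTLSkey-1:<hash>:<base64>:, via an inline python helper (nvmf/common.sh@705); the "00" hash indicator corresponds to digest 0. A sketch of that encoding, assuming the base64 payload is the raw key bytes followed by their little-endian CRC32 — consistent with the NVMeTLSkey-1:00:...: strings echoed above, but treat the trailer layout as an assumption and check format_interchange_psk if it matters:

    # key0 from this run; the test uses the hex string as literal ASCII bytes.
    key='00112233445566778899aabbccddeeff'
    psk=$(python3 -c 'import base64,sys,zlib;k=sys.argv[1].encode();print("NVMeTLSkey-1:00:"+base64.b64encode(k+zlib.crc32(k).to_bytes(4,"little")).decode()+":")' "$key")
    # Write with restrictive permissions, matching the chmod 0600 traced above.
    (umask 077 && printf '%s\n' "$psk" > /tmp/:spdk-test:key0)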
00:37:05.002 [2024-07-13 20:24:52.621368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3381333 ] 00:37:05.002 EAL: No free 2048 kB hugepages reported on node 1 00:37:05.261 [2024-07-13 20:24:52.680792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:05.261 [2024-07-13 20:24:52.770124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:05.517 20:24:53 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:05.517 20:24:53 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:05.517 20:24:53 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:05.517 20:24:53 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.517 20:24:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:05.517 [2024-07-13 20:24:53.025395] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:05.517 null0 00:37:05.517 [2024-07-13 20:24:53.057447] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:05.517 [2024-07-13 20:24:53.057952] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:05.517 20:24:53 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.517 20:24:53 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:05.517 671761199 00:37:05.517 20:24:53 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:05.517 442501135 00:37:05.517 20:24:53 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3381461 00:37:05.517 20:24:53 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:05.517 20:24:53 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3381461 /var/tmp/bperf.sock 00:37:05.517 20:24:53 keyring_linux -- common/autotest_common.sh@827 -- # '[' -z 3381461 ']' 00:37:05.517 20:24:53 keyring_linux -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:05.517 20:24:53 keyring_linux -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:05.517 20:24:53 keyring_linux -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:05.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:05.517 20:24:53 keyring_linux -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:05.517 20:24:53 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:05.517 [2024-07-13 20:24:53.122748] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 22.11.4 initialization... 
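Where keyring_file reads PSKs from files, keyring_linux stores them in the kernel session keyring: the two keyctl add calls above return the key serials (671761199 and 442501135 in this run), and SPDK later resolves the :spdk-test:keyN names through its keyring_linux plugin. The keyctl round-trip the test relies on, as a standalone sketch (serials differ on every run):

    sn=$(keyctl add user :spdk-test:key0 "$psk" @s)  # add under session keyring, prints serial
    keyctl search @s user :spdk-test:key0            # name -> serial, as linux.sh@16 does
    keyctl print "$sn"                               # dump payload for comparison
    keyctl unlink "$sn" @s                           # removal, mirrored by cleanup later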
00:37:05.517 [2024-07-13 20:24:53.122825] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3381461 ] 00:37:05.517 EAL: No free 2048 kB hugepages reported on node 1 00:37:05.774 [2024-07-13 20:24:53.180100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:05.774 [2024-07-13 20:24:53.268517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:05.774 20:24:53 keyring_linux -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:05.774 20:24:53 keyring_linux -- common/autotest_common.sh@860 -- # return 0 00:37:05.774 20:24:53 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:05.774 20:24:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:06.030 20:24:53 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:06.030 20:24:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:06.288 20:24:53 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:06.288 20:24:53 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:06.545 [2024-07-13 20:24:54.129325] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:06.802 nvme0n1 00:37:06.802 20:24:54 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:06.802 20:24:54 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:06.802 20:24:54 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:06.802 20:24:54 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:06.802 20:24:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:06.803 20:24:54 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:07.059 20:24:54 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:07.059 20:24:54 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:07.059 20:24:54 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:07.059 20:24:54 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:07.060 20:24:54 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:07.060 20:24:54 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.060 20:24:54 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:07.060 20:24:54 keyring_linux -- keyring/linux.sh@25 -- # sn=671761199 00:37:07.060 20:24:54 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:07.317 20:24:54 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
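Because bperf was launched with --wait-for-rpc, the test can enable the Linux keyring plugin before the framework initializes, then attach over TLS by naming the kernel key directly rather than a file. The same RPC sequence as traced above, collected in one place (socket path as in this run):

    rpc=./scripts/rpc.py
    "$rpc" -s /var/tmp/bperf.sock keyring_linux_set_options --enable
    "$rpc" -s /var/tmp/bperf.sock framework_start_init
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key0
    "$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq length   # expect 1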
00:37:07.317 20:24:54 keyring_linux -- keyring/linux.sh@26 -- # [[ 671761199 == \6\7\1\7\6\1\1\9\9 ]] 00:37:07.317 20:24:54 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 671761199 00:37:07.317 20:24:54 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:07.317 20:24:54 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:07.317 Running I/O for 1 seconds... 00:37:08.252 00:37:08.252 Latency(us) 00:37:08.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.252 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:08.252 nvme0n1 : 1.03 3725.47 14.55 0.00 0.00 33947.03 9951.76 46020.84 00:37:08.252 =================================================================================================================== 00:37:08.252 Total : 3725.47 14.55 0.00 0.00 33947.03 9951.76 46020.84 00:37:08.252 0 00:37:08.252 20:24:55 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:08.252 20:24:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:08.510 20:24:56 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:08.510 20:24:56 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:08.510 20:24:56 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:08.510 20:24:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:08.510 20:24:56 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:08.510 20:24:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.812 20:24:56 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:08.812 20:24:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:08.812 20:24:56 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:08.812 20:24:56 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:08.812 20:24:56 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:08.812 20:24:56 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:08.812 20:24:56 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:08.812 20:24:56 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:08.812 20:24:56 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:08.812 20:24:56 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:08.812 20:24:56 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:08.812 20:24:56 keyring_linux -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:09.072 [2024-07-13 20:24:56.608697] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:09.072 [2024-07-13 20:24:56.609575] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd4270 (107): Transport endpoint is not connected 00:37:09.072 [2024-07-13 20:24:56.610563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd4270 (9): Bad file descriptor 00:37:09.072 [2024-07-13 20:24:56.611562] nvme_ctrlr.c:4042:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:09.072 [2024-07-13 20:24:56.611587] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:09.072 [2024-07-13 20:24:56.611603] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:09.072 request: 00:37:09.072 { 00:37:09.072 "name": "nvme0", 00:37:09.072 "trtype": "tcp", 00:37:09.072 "traddr": "127.0.0.1", 00:37:09.072 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:09.072 "adrfam": "ipv4", 00:37:09.072 "trsvcid": "4420", 00:37:09.072 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:09.072 "psk": ":spdk-test:key1", 00:37:09.072 "method": "bdev_nvme_attach_controller", 00:37:09.072 "req_id": 1 00:37:09.072 } 00:37:09.072 Got JSON-RPC error response 00:37:09.072 response: 00:37:09.072 { 00:37:09.072 "code": -5, 00:37:09.072 "message": "Input/output error" 00:37:09.072 } 00:37:09.072 20:24:56 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:09.072 20:24:56 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:09.072 20:24:56 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:09.072 20:24:56 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@33 -- # sn=671761199 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 671761199 00:37:09.072 1 links removed 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@33 -- # sn=442501135 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 442501135 00:37:09.072 1 links removed 00:37:09.072 20:24:56 keyring_linux -- keyring/linux.sh@41 
-- # killprocess 3381461 00:37:09.072 20:24:56 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 3381461 ']' 00:37:09.072 20:24:56 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 3381461 00:37:09.072 20:24:56 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:09.072 20:24:56 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:09.072 20:24:56 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3381461 00:37:09.072 20:24:56 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:09.072 20:24:56 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:09.072 20:24:56 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3381461' 00:37:09.072 killing process with pid 3381461 00:37:09.072 20:24:56 keyring_linux -- common/autotest_common.sh@965 -- # kill 3381461 00:37:09.072 Received shutdown signal, test time was about 1.000000 seconds 00:37:09.072 00:37:09.072 Latency(us) 00:37:09.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:09.072 =================================================================================================================== 00:37:09.072 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:09.072 20:24:56 keyring_linux -- common/autotest_common.sh@970 -- # wait 3381461 00:37:09.333 20:24:56 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3381333 00:37:09.333 20:24:56 keyring_linux -- common/autotest_common.sh@946 -- # '[' -z 3381333 ']' 00:37:09.333 20:24:56 keyring_linux -- common/autotest_common.sh@950 -- # kill -0 3381333 00:37:09.333 20:24:56 keyring_linux -- common/autotest_common.sh@951 -- # uname 00:37:09.333 20:24:56 keyring_linux -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:09.333 20:24:56 keyring_linux -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3381333 00:37:09.333 20:24:56 keyring_linux -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:09.333 20:24:56 keyring_linux -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:09.333 20:24:56 keyring_linux -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3381333' 00:37:09.333 killing process with pid 3381333 00:37:09.333 20:24:56 keyring_linux -- common/autotest_common.sh@965 -- # kill 3381333 00:37:09.333 20:24:56 keyring_linux -- common/autotest_common.sh@970 -- # wait 3381333 00:37:09.901 00:37:09.901 real 0m4.930s 00:37:09.901 user 0m9.209s 00:37:09.901 sys 0m1.479s 00:37:09.901 20:24:57 keyring_linux -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:09.901 20:24:57 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:09.901 ************************************ 00:37:09.901 END TEST keyring_linux 00:37:09.901 ************************************ 00:37:09.901 20:24:57 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:09.901 20:24:57 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:09.901 20:24:57 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:09.901 20:24:57 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:09.901 20:24:57 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:09.901 20:24:57 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:09.901 20:24:57 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:09.901 20:24:57 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:09.901 20:24:57 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:09.901 20:24:57 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 
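The NOT wrapper in the trace above asserts the negative case just finished: presumably only key0 was provisioned on the target side earlier in linux.sh (not shown in this excerpt), so attaching with --psk :spdk-test:key1 tears down the TLS connection (errno 107) and rpc.py surfaces the -5 Input/output error, which is exactly the outcome the test wants. Stripped of the helpers, the assertion amounts to this hypothetical standalone form:

    if "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk :spdk-test:key1; then
        echo 'unexpected success: wrong PSK must not connect' >&2; exit 1
    fi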
00:37:09.901 20:24:57 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:09.901 20:24:57 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:09.901 20:24:57 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:09.901 20:24:57 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:09.901 20:24:57 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:09.901 20:24:57 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:09.901 20:24:57 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:09.901 20:24:57 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:09.901 20:24:57 -- common/autotest_common.sh@10 -- # set +x 00:37:09.901 20:24:57 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:09.901 20:24:57 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:09.901 20:24:57 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:09.901 20:24:57 -- common/autotest_common.sh@10 -- # set +x 00:37:11.803 INFO: APP EXITING 00:37:11.803 INFO: killing all VMs 00:37:11.803 INFO: killing vhost app 00:37:11.803 INFO: EXIT DONE 00:37:12.735 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:12.735 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:12.735 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:12.735 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:12.735 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:12.735 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:12.735 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:12.735 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:12.735 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:12.735 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:12.735 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:12.735 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:12.735 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:12.735 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:12.735 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:12.735 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:12.735 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:14.111 Cleaning 00:37:14.111 Removing: /var/run/dpdk/spdk0/config 00:37:14.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:14.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:14.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:14.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:14.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:14.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:14.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:14.111 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:14.111 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:14.111 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:14.111 Removing: /var/run/dpdk/spdk1/config 00:37:14.111 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:14.111 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:14.111 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:14.111 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:14.111 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:14.111 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:14.111 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:14.111 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:14.111 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:14.111 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:14.111 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:14.111 Removing: /var/run/dpdk/spdk2/config 00:37:14.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:14.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:14.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:14.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:14.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:14.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:14.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:14.111 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:14.111 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:14.111 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:14.111 Removing: /var/run/dpdk/spdk3/config 00:37:14.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:14.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:14.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:14.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:14.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:14.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:14.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:14.111 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:14.111 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:14.111 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:14.111 Removing: /var/run/dpdk/spdk4/config 00:37:14.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:14.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:14.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:14.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:14.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:14.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:14.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:14.111 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:14.111 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:14.111 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:14.111 Removing: /dev/shm/bdev_svc_trace.1 00:37:14.111 Removing: /dev/shm/nvmf_trace.0 00:37:14.111 Removing: /dev/shm/spdk_tgt_trace.pid3061632 00:37:14.111 Removing: /var/run/dpdk/spdk0 00:37:14.111 Removing: /var/run/dpdk/spdk1 00:37:14.111 Removing: /var/run/dpdk/spdk2 00:37:14.111 Removing: /var/run/dpdk/spdk3 00:37:14.111 Removing: /var/run/dpdk/spdk4 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3060086 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3060816 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3061632 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3062069 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3062770 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3062910 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3063628 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3063637 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3063878 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3065077 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3065985 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3066290 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3066475 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3066712 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3066957 00:37:14.111 Removing: 
/var/run/dpdk/spdk_pid3067149 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3067305 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3067548 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3068567 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3070916 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3071084 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3071244 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3071366 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3071679 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3071799 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3072114 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3072118 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3072402 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3072422 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3072586 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3072706 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3073083 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3073237 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3073430 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3073598 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3073740 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3073811 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3074036 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3074236 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3074403 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3074556 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3074830 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3074981 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3075143 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3075301 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3075569 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3075727 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3075893 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3076115 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3076318 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3076471 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3076638 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3076904 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3077066 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3077228 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3077417 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3077656 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3077732 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3077940 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3080111 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3133604 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3136152 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3143039 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3146268 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3148620 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3149138 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3156274 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3156276 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3156929 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3157605 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3158232 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3159052 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3159244 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3159403 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3159533 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3159543 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3160196 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3160733 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3161386 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3161793 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3161796 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3162067 00:37:14.111 Removing: 
/var/run/dpdk/spdk_pid3162917 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3163664 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3169017 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3169174 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3171674 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3175374 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3177423 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3183790 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3188980 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3190290 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3191459 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3201596 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3203722 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3228994 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3231774 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3232950 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3234260 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3234284 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3234421 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3234560 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3234871 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3236182 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3236905 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3237244 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3238934 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3239248 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3239808 00:37:14.111 Removing: /var/run/dpdk/spdk_pid3242200 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3245452 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3249594 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3272483 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3275742 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3279488 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3280438 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3281489 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3284076 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3286310 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3290528 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3290530 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3293294 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3293429 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3293569 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3293831 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3293911 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3295037 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3296215 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3297390 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3298566 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3299749 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3300929 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3304726 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3305080 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3306571 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3307516 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3311398 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3313381 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3316668 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3319985 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3326194 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3330653 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3330659 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3343337 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3343747 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3344155 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3344560 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3345135 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3345559 00:37:14.371 Removing: 
/var/run/dpdk/spdk_pid3346071 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3346478 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3348860 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3349118 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3352898 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3352951 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3354555 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3359570 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3359585 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3362472 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3363758 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3365151 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3366013 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3367427 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3368176 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3374084 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3374451 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3374841 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3376400 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3376798 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3377076 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3379511 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3379520 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3380974 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3381333 00:37:14.371 Removing: /var/run/dpdk/spdk_pid3381461 00:37:14.371 Clean 00:37:14.371 20:25:02 -- common/autotest_common.sh@1447 -- # return 0 00:37:14.371 20:25:02 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:14.371 20:25:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:14.371 20:25:02 -- common/autotest_common.sh@10 -- # set +x 00:37:14.371 20:25:02 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:14.371 20:25:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:14.371 20:25:02 -- common/autotest_common.sh@10 -- # set +x 00:37:14.630 20:25:02 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:14.630 20:25:02 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:14.630 20:25:02 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:14.630 20:25:02 -- spdk/autotest.sh@391 -- # hash lcov 00:37:14.630 20:25:02 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:14.630 20:25:02 -- spdk/autotest.sh@393 -- # hostname 00:37:14.630 20:25:02 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:14.630 geninfo: WARNING: invalid characters removed from testname! 
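The coverage capture just taken (cov_test.info) is merged with the pre-test baseline and then filtered in the lcov passes that follow. Condensed, the post-processing amounts to the sketch below, assuming lcov 1.x semantics where --remove/-r accepts several patterns in one call (the log issues one call per pattern):

    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info    # merge baseline + test
    lcov -q -r cov_total.info '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
         '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o cov_total.info # drop externals/tools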
00:37:46.688 20:25:30 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:46.688 20:25:34 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:49.969 20:25:37 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:52.498 20:25:40 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:55.809 20:25:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:58.332 20:25:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:01.610 20:25:48 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:01.610 20:25:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:01.610 20:25:48 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:01.610 20:25:48 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:01.610 20:25:48 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:01.610 20:25:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:01.610 20:25:48 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:01.610 20:25:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:01.610 20:25:48 -- paths/export.sh@5 -- $ export PATH
00:38:01.610 20:25:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:01.610 20:25:48 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:38:01.610 20:25:48 -- common/autobuild_common.sh@437 -- $ date +%s
00:38:01.610 20:25:48 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1720895148.XXXXXX
00:38:01.610 20:25:48 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1720895148.NsNmgt
00:38:01.610 20:25:48 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:38:01.610 20:25:48 -- common/autobuild_common.sh@443 -- $ '[' -n v22.11.4 ']'
00:38:01.610 20:25:48 -- common/autobuild_common.sh@444 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:38:01.610 20:25:48 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:38:01.610 20:25:48 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:38:01.610 20:25:48 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:38:01.610 20:25:48 -- common/autobuild_common.sh@453 -- $ get_config_params
00:38:01.610 20:25:48 -- common/autotest_common.sh@395 -- $ xtrace_disable
00:38:01.610 20:25:48 -- common/autotest_common.sh@10 -- $ set +x
00:38:01.610 20:25:48 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:38:01.610 20:25:48 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:38:01.610 20:25:48 -- pm/common@17 -- $ local monitor
00:38:01.610 20:25:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:01.610 20:25:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:01.610 20:25:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:01.610 20:25:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:01.610 20:25:48 -- pm/common@21 -- $ date +%s
00:38:01.610 20:25:48 -- pm/common@25 -- $ sleep 1
00:38:01.610 20:25:48 -- pm/common@21 -- $ date +%s
00:38:01.610 20:25:48 -- pm/common@21 -- $ date +%s
00:38:01.610 20:25:48 -- pm/common@21 -- $ date +%s
00:38:01.611 20:25:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720895148
00:38:01.611 20:25:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720895148
00:38:01.611 20:25:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720895148
00:38:01.611 20:25:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720895148
00:38:01.611 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720895148_collect-vmstat.pm.log
00:38:01.611 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720895148_collect-cpu-temp.pm.log
00:38:01.611 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720895148_collect-cpu-load.pm.log
00:38:01.611 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720895148_collect-bmc-pm.bmc.pm.log
00:38:02.200 20:25:49 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:38:02.200 20:25:49 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:38:02.200 20:25:49 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:02.200 20:25:49 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:38:02.200 20:25:49 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:38:02.200 20:25:49 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:38:02.200 20:25:49 -- spdk/autopackage.sh@19 -- $ timing_finish
00:38:02.200 20:25:49 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:38:02.200 20:25:49 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:38:02.201 20:25:49 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:02.461 20:25:49 -- spdk/autopackage.sh@20 -- $ exit 0
00:38:02.461 20:25:49 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:38:02.461 20:25:49 -- pm/common@29 -- $ signal_monitor_resources TERM
00:38:02.461 20:25:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:38:02.461 20:25:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:02.461 20:25:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:38:02.461 20:25:49 -- pm/common@44 -- $ pid=3392575
00:38:02.461 20:25:49 -- pm/common@50 -- $ kill -TERM 3392575
00:38:02.461 20:25:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:02.461 20:25:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:38:02.461 20:25:49 -- pm/common@44 -- $ pid=3392577
00:38:02.461 20:25:49 -- pm/common@50 -- $ kill -TERM 3392577
00:38:02.461 20:25:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:02.461 20:25:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:38:02.461 20:25:49 -- pm/common@44 -- $ pid=3392579
00:38:02.461 20:25:49 -- pm/common@50 -- $ kill -TERM 3392579
00:38:02.461 20:25:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:38:02.461 20:25:49 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:38:02.461 20:25:49 -- pm/common@44 -- $ pid=3392604
00:38:02.461 20:25:49 -- pm/common@50 -- $ sudo -E kill -TERM 3392604
00:38:02.461 + [[ -n 2956472 ]]
00:38:02.461 + sudo kill 2956472
00:38:02.470 [Pipeline] }
00:38:02.486 [Pipeline] // stage
00:38:02.490 [Pipeline] }
00:38:02.504 [Pipeline] // timeout
00:38:02.508 [Pipeline] }
00:38:02.523 [Pipeline] // catchError
00:38:02.527 [Pipeline] }
00:38:02.543 [Pipeline] // wrap
00:38:02.547 [Pipeline] }
00:38:02.561 [Pipeline] // catchError
00:38:02.568 [Pipeline] stage
00:38:02.570 [Pipeline] { (Epilogue)
00:38:02.584 [Pipeline] catchError
00:38:02.586 [Pipeline] {
00:38:02.599 [Pipeline] echo
00:38:02.601 Cleanup processes
00:38:02.608 [Pipeline] sh
00:38:02.894 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:02.894 3392712 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:38:02.894 3392840 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:02.910 [Pipeline] sh
00:38:03.195 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:38:03.195 ++ grep -v 'sudo pgrep'
00:38:03.195 ++ awk '{print $1}'
00:38:03.195 + sudo kill -9 3392712
00:38:03.207 [Pipeline] sh
00:38:03.489 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:38:13.480 [Pipeline] sh
00:38:13.766 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:38:13.766 Artifacts sizes are good
00:38:13.788 [Pipeline] archiveArtifacts
00:38:13.822 Archiving artifacts
00:38:14.062 [Pipeline] sh
00:38:14.345 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:38:14.361 [Pipeline] cleanWs
00:38:14.371 [WS-CLEANUP] Deleting project workspace...
00:38:14.371 [WS-CLEANUP] Deferred wipeout is used...
00:38:14.377 [WS-CLEANUP] done
00:38:14.379 [Pipeline] }
00:38:14.399 [Pipeline] // catchError
00:38:14.411 [Pipeline] sh
00:38:14.694 + logger -p user.info -t JENKINS-CI
00:38:14.705 [Pipeline] }
00:38:14.722 [Pipeline] // stage
00:38:14.728 [Pipeline] }
00:38:14.747 [Pipeline] // node
00:38:14.753 [Pipeline] End of Pipeline
00:38:14.793 Finished: SUCCESS